Improve Robustness of Reinforcement Learning against Observation Perturbations via l∞ Lipschitz Policy Networks

Authors

  • Buqing Nie, MoE Key Lab of Artificial Intelligence and AI Institute, Shanghai Jiao Tong University
  • Jingtian Ji, MoE Key Lab of Artificial Intelligence and AI Institute, Shanghai Jiao Tong University
  • Yangqing Fu, MoE Key Lab of Artificial Intelligence and AI Institute, Shanghai Jiao Tong University
  • Yue Gao, MoE Key Lab of Artificial Intelligence and AI Institute, Shanghai Jiao Tong University

DOI:

https://doi.org/10.1609/aaai.v38i13.29360

Keywords:

ML: Reinforcement Learning

Abstract

Deep Reinforcement Learning (DRL) has achieved remarkable advances in sequential decision tasks. However, recent works have revealed that DRL agents are susceptible to slight perturbations in observations. This vulnerability raises concerns regarding the effectiveness and robustness of deploying such agents in real-world applications. In this work, we propose a novel robust reinforcement learning method called SortRL, which improves the robustness of DRL policies against observation perturbations from the perspective of the network architecture. We employ a novel architecture for the policy network that incorporates global $l_\infty$ Lipschitz continuity, and provide a convenient method to enhance policy robustness based on the output margin. In addition, a training framework is designed for SortRL, which solves given tasks while maintaining robustness against $l_\infty$ bounded perturbations on the observations. Several experiments are conducted to evaluate the effectiveness of our method, including classic control tasks and video games. The results demonstrate that SortRL achieves state-of-the-art robustness performance against different perturbation strengths.
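To make the abstract's core idea concrete, below is a minimal, hedged sketch (not the authors' released SortRL code) of a policy network that is globally 1-Lipschitz with respect to the $l_\infty$ norm, together with a margin-based stability check of the kind the abstract alludes to. The sketch assumes a standard construction: each linear layer has its rows rescaled to unit $l_1$ norm (so $\|Wx\|_\infty \le \|x\|_\infty$), a MaxMin/GroupSort activation preserves the Lipschitz bound, and a greedy action cannot change under an $\epsilon$-bounded $l_\infty$ perturbation whenever the top-1 logit margin exceeds $2L\epsilon$. All class and function names are illustrative assumptions, not the paper's API.

```python
# Hedged sketch of an l_inf Lipschitz policy network and a margin-based
# robustness check. Assumes hidden width is even (for the MaxMin activation).

import torch
import torch.nn as nn


class LinfLinear(nn.Module):
    """Linear layer whose rows are rescaled to l_1 norm <= 1, which makes it
    1-Lipschitz as a map from (R^n, l_inf) to (R^m, l_inf)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_dim, in_dim) / in_dim)
        self.bias = nn.Parameter(torch.zeros(out_dim))

    def forward(self, x):
        # Only rescale rows whose l_1 norm exceeds 1; smaller rows are left as-is.
        row_norm = self.weight.abs().sum(dim=1, keepdim=True).clamp(min=1.0)
        return x @ (self.weight / row_norm).t() + self.bias


class MaxMin(nn.Module):
    """GroupSort with group size 2: sorts feature pairs, 1-Lipschitz under l_inf."""

    def forward(self, x):
        a, b = x.chunk(2, dim=-1)
        return torch.cat([torch.maximum(a, b), torch.minimum(a, b)], dim=-1)


class LipschitzPolicy(nn.Module):
    """Composition of 1-Lipschitz blocks, hence globally 1-Lipschitz w.r.t. l_inf."""

    def __init__(self, obs_dim, hidden, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            LinfLinear(obs_dim, hidden), MaxMin(),
            LinfLinear(hidden, hidden), MaxMin(),
            LinfLinear(hidden, n_actions),
        )

    def forward(self, obs):
        return self.net(obs)  # action logits


def action_is_certified(logits, eps, lip_const=1.0):
    """For an L-Lipschitz policy, each logit moves by at most L * eps under an
    l_inf perturbation of size eps, so a top-1 margin above 2 * L * eps
    guarantees the greedy action is unchanged."""
    top2 = logits.topk(2, dim=-1).values
    margin = top2[..., 0] - top2[..., 1]
    return margin > 2.0 * lip_const * eps


if __name__ == "__main__":
    policy = LipschitzPolicy(obs_dim=4, hidden=64, n_actions=2)
    logits = policy(torch.randn(1, 4))
    print(action_is_certified(logits, eps=0.05))
```

The row-rescaling and MaxMin activation shown here are one common way to obtain $l_\infty$ Lipschitz guarantees; the paper's specific sort-based architecture and training framework should be consulted for the actual SortRL construction.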

Published

2024-03-24

How to Cite

Nie, B., Ji, J., Fu, Y., & Gao, Y. (2024). Improve Robustness of Reinforcement Learning against Observation Perturbations via l∞ Lipschitz Policy Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 38(13), 14457-14465. https://doi.org/10.1609/aaai.v38i13.29360

Section

AAAI Technical Track on Machine Learning IV