Augmented Proximal Policy Optimization for Safe Reinforcement Learning

Authors

  • Juntao Dai, Zhejiang University
  • Jiaming Ji, Zhejiang University
  • Long Yang, Peking University
  • Qian Zheng, Zhejiang University
  • Gang Pan, Zhejiang University

DOI:

https://doi.org/10.1609/aaai.v37i6.25888

Keywords:

ML: Reinforcement Learning Algorithms, PEAI: Safety, Robustness & Trustworthiness, PRS: Control of High-Dimensional Systems, PRS: Planning Under Uncertainty, PRS: Planning With Markov Models (MDPs, POMDPs), RU: Decision/Utility Theory

Abstract

Safe reinforcement learning considers practical scenarios in which the return is maximized while safety constraints are satisfied. Current algorithms, which suffer from training oscillations or approximation errors, still struggle to update the policy efficiently while satisfying constraints precisely. In this article, we propose Augmented Proximal Policy Optimization (APPO), which augments the Lagrangian function of the primal constrained problem by attaching a quadratic deviation term. The constructed multiplier-penalty function dampens cost oscillations for stable convergence while remaining equivalent to the primal constrained problem, so safety costs can be controlled precisely. APPO alternately updates the policy and the Lagrangian multiplier by solving the constructed augmented primal-dual problem, which can be easily implemented with any first-order optimizer. We apply APPO to diverse safety-constrained tasks, setting a new state of the art against a comprehensive set of safe RL baselines. Extensive experiments verify the merits of our method: easy implementation, stable convergence, and precise cost control.
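
To make the multiplier-penalty idea concrete, the sketch below is a minimal, hypothetical illustration of a classical augmented-Lagrangian primal-dual loop on a toy scalar problem. The functions reward and cost and the coefficients rho and eta are invented for illustration only; the exact objective, policy parameterization, and update rules used by APPO are those given in the paper, not this code.

    # Illustrative sketch only: a generic augmented-Lagrangian (multiplier-penalty)
    # primal-dual loop on a toy problem, NOT the APPO algorithm itself.
    # Toy problem (hypothetical): maximize reward(x) = -(x - 3)^2
    # subject to cost(x) = x - 1 <= 0.

    rho = 10.0   # penalty coefficient on the quadratic deviation term
    eta = 0.01   # primal (first-order) step size
    lam = 0.0    # Lagrange multiplier
    x = 0.0      # primal variable (stands in for the policy parameters)

    def reward(x):   # objective to maximize
        return -(x - 3.0) ** 2

    def cost(x):     # constraint: cost(x) <= 0 must hold
        return x - 1.0

    for step in range(2000):
        # Augmented Lagrangian for an inequality constraint:
        # L(x, lam) = reward(x) - (1/(2*rho)) * (max(0, lam + rho*cost(x))**2 - lam**2)
        # Primal ascent on L via its gradient in x.
        active = max(0.0, lam + rho * cost(x))
        grad_reward = -2.0 * (x - 3.0)
        grad_cost = 1.0
        x += eta * (grad_reward - active * grad_cost)
        # Dual update: the multiplier moves toward the active penalty value.
        lam = max(0.0, lam + rho * cost(x))

    print(f"x = {x:.3f}, cost = {cost(x):.3f}, lambda = {lam:.3f}")
    # Converges near the constrained optimum x = 1 (cost = 0) on this toy problem.

The point of the sketch is that both the primal and the dual variables are updated with simple first-order steps on a single penalized objective, which is the implementation convenience the abstract refers to; the quadratic deviation term is what keeps the constraint violation from oscillating.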

Published

2023-06-26

How to Cite

Dai, J., Ji, J., Yang, L., Zheng, Q., & Pan, G. (2023). Augmented Proximal Policy Optimization for Safe Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(6), 7288-7295. https://doi.org/10.1609/aaai.v37i6.25888

Section

AAAI Technical Track on Machine Learning I