Gradient-Adaptive Pareto Optimization for Constrained Reinforcement Learning

Authors

  • Zixian Zhou, Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS; University of Chinese Academy of Sciences
  • Mengda Huang, Institute of Computing Technology, Chinese Academy of Sciences
  • Feiyang Pan, Huawei EI Innovation Lab
  • Jia He, Huawei EI Innovation Lab
  • Xiang Ao, Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS; University of Chinese Academy of Sciences; Institute of Intelligent Computing Technology, Suzhou, CAS
  • Dandan Tu, Huawei EI Innovation Lab
  • Qing He, Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS; University of Chinese Academy of Sciences

DOI:

https://doi.org/10.1609/aaai.v37i9.26353

Keywords:

ML: Reinforcement Learning Algorithms, ML: Applications, ML: Deep Learning Theory, ML: Optimization, ML: Reinforcement Learning Theory

Abstract

Constrained Reinforcement Learning (CRL) has attracted broad interest in recent years; it pursues maximizing long-term returns while constraining costs. Although CRL can be cast as a multi-objective optimization problem, it still faces the key challenge that gradient-based Pareto optimization methods tend to stick to known Pareto-optimal solutions even when they yield poor returns (e.g., the safest self-driving car that never moves) or violate the constraints (e.g., the record-breaking racer that crashes the car). In this paper, we propose Gradient-adaptive Constrained Policy Optimization (GCPO for short), a novel Pareto optimization method for CRL with two adaptive gradient recalibration techniques. First, to find Pareto-optimal solutions with balanced performance over all targets, we propose gradient rebalancing, which forces the agent to improve more on under-optimized objectives at every policy iteration. Second, to guarantee that the cost constraints are satisfied, we propose gradient perturbation, which can temporarily sacrifice returns for costs. Experiments on the SafetyGym benchmarks show that our method consistently outperforms previous CRL methods in reward while satisfying the constraints.
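To make the two recalibration ideas concrete, the following is a minimal Python sketch written only from the description above; the function names, the inverse-progress weighting, and the blending rule are illustrative assumptions, not the update rules derived in the paper.

# Illustrative sketch of the two ideas named in the abstract (gradient
# rebalancing and gradient perturbation), reconstructed from the abstract
# alone. All names and numerical rules here are hypothetical stand-ins.
import numpy as np

def rebalance(grad_reward, grad_cost, reward_progress, cost_progress, eps=1e-8):
    # Weight each objective's gradient inversely to its recent progress, so the
    # under-optimized objective receives the larger share of the update
    # (assumed weighting; the paper defines its own recalibration).
    progress = np.array([reward_progress, cost_progress]) + eps
    weights = (1.0 / progress) / np.sum(1.0 / progress)
    return weights[0] * grad_reward + weights[1] * grad_cost

def perturb_for_constraint(update, grad_cost_descent, cost, cost_limit, step=0.5):
    # If the cost constraint is violated, tilt the update toward the
    # cost-descent direction, temporarily sacrificing return for feasibility
    # (again, an assumed rule standing in for the paper's perturbation).
    if cost <= cost_limit:
        return update
    return (1.0 - step) * update + step * grad_cost_descent

if __name__ == "__main__":
    g_r = np.array([1.0, 0.0])    # ascent direction for return
    g_c = np.array([0.2, -1.0])   # descent direction for cost
    u = rebalance(g_r, g_c, reward_progress=0.8, cost_progress=0.1)
    u = perturb_for_constraint(u, grad_cost_descent=g_c, cost=30.0, cost_limit=25.0)
    print("recalibrated update direction:", u)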

Published

2023-06-26

How to Cite

Zhou, Z., Huang, M., Pan, F., He, J., Ao, X., Tu, D., & He, Q. (2023). Gradient-Adaptive Pareto Optimization for Constrained Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(9), 11443-11451. https://doi.org/10.1609/aaai.v37i9.26353

Issue

Vol. 37 No. 9 (2023)

Section

AAAI Technical Track on Machine Learning IV