Proactive Constrained Policy Optimization with Preemptive Penalty

Authors

  • Ning Yang Institute of Automation, Chinese Academy of Sciences
  • Pengyu Wang Institute of Automation, Chinese Academy of Sciences School of Science and Engineering, The Chinese University of Hong Kong (Shenzhen), Longgang, Shenzhen, Guangdong, 518172, P.R. China
  • Guoqing Liu Microsoft Research
  • Haifeng Zhang Institute of Automation, Chinese Academy of Sciences
  • Pin Lyu Institute of Automation, Chinese Academy of Sciences
  • Jun Wang University College London

DOI:

https://doi.org/10.1609/aaai.v40i32.39978

Abstract

Safe Reinforcement Learning (RL) often suffers from constraint violations and instability, motivating constrained policy optimization, which seeks optimal policies while ensuring adherence to specific constraints such as safety. Constrained optimization problems are typically addressed with the Lagrangian method, a post-violation remedial approach that can cause oscillations and overshoot. Motivated by this, we propose a novel method named Proactive Constrained Policy Optimization (PCPO) that incorporates a preemptive penalty mechanism. This mechanism adds barrier terms to the objective function as the policy nears the constraint boundary, imposing a cost before any violation occurs. Meanwhile, we introduce a constraint-aware intrinsic reward to guide boundary-aware exploration, which is activated only when the policy approaches the constraint boundary. We establish theoretical upper and lower bounds on the duality gap and on the performance of the PCPO update, shedding light on the method's convergence characteristics. Additionally, to enhance optimization performance, we adopt a policy iteration approach. An interesting finding is that PCPO demonstrates significant stability in experiments. Experimental results indicate that the PCPO framework provides a robust solution for policy optimization under constraints, with important implications for future research and practical applications.
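The preemptive-penalty idea described above can be illustrated with a minimal sketch. This is not the paper's exact formulation: the function name, the log-barrier form, and the `margin` and `scale` parameters are assumptions chosen for illustration. The key property it demonstrates is that the penalty is zero away from the constraint boundary and grows sharply as the expected cost approaches the limit, before any violation occurs.

```python
import math

def barrier_penalty(cost, limit, margin=0.1, scale=1.0):
    """Illustrative preemptive log-barrier penalty.

    cost   : current expected constraint cost of the policy
    limit  : constraint threshold (violation when cost > limit)
    margin : width of the "near-boundary" zone where the penalty activates
    scale  : penalty strength

    Returns 0 while the policy is safely inside the feasible region,
    and a penalty that grows without bound as cost approaches the limit.
    """
    slack = limit - cost
    if slack >= margin:
        # Far from the boundary: no penalty, objective is unmodified.
        return 0.0
    # Near the boundary: log-barrier blows up as slack shrinks to 0.
    slack = max(slack, 1e-8)
    return -scale * math.log(slack / margin)
```

A penalized objective would then be maximized as `J(theta) - barrier_penalty(Jc(theta), d)`, so gradient updates are pushed away from the boundary proactively rather than corrected after a violation, in contrast to the Lagrangian approach.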

Published

2026-03-14

How to Cite

Yang, N., Wang, P., Liu, G., Zhang, H., Lyu, P., & Wang, J. (2026). Proactive Constrained Policy Optimization with Preemptive Penalty. Proceedings of the AAAI Conference on Artificial Intelligence, 40(32), 27583–27591. https://doi.org/10.1609/aaai.v40i32.39978

Section

AAAI Technical Track on Machine Learning IX