Stabilizing Policy Gradient Methods via Reward Profiling

Authors

  • Shihab Ahmed, University of Central Florida
  • El Houcine Bergou, Mohammed VI Polytechnic University
  • Yue Wang, University of Central Florida
  • Aritra Dutta, University of Central Florida

DOI:

https://doi.org/10.1609/aaai.v40i24.39035

Abstract

Policy gradient methods, which have been extensively studied in the last decade, offer an effective and efficient framework for reinforcement learning problems. However, their performance is often unsatisfactory, suffering from unreliable reward improvements and slow convergence due to high variance in gradient estimates. In this paper, we propose a universal reward profiling framework that can be seamlessly integrated with any policy gradient algorithm, in which we selectively update the policy based on high-confidence performance estimates. We theoretically show that our technique does not slow the convergence of the baseline policy gradient methods and that, with high probability, it yields stable and monotonic improvements in their performance. Empirically, on eight continuous-control benchmarks (Box2D and MuJoCo/PyBullet), our profiling yields up to 1.5x faster convergence to near-optimal returns and up to a 1.75x reduction in return variance on some setups. Our profiling approach offers a general, theoretically grounded path to more reliable and efficient policy learning in complex environments.
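
The abstract does not spell out the profiling rule, but the core idea it describes (gate each policy update on a high-confidence estimate of the new policy's return) can be illustrated with a minimal Python sketch. Everything below is an assumption for illustration: the names (profiled_update, grad_step, evaluate_return) and the one-sided confidence test are hypothetical stand-ins, not the authors' algorithm.

    import numpy as np

    def profiled_update(policy, grad_step, evaluate_return, n_eval=10, z=1.645):
        """One policy-gradient step gated by a reward-profiling check (illustrative sketch).

        policy          -- object exposing get_params() / set_params()   [assumed interface]
        grad_step       -- function mapping params to candidate params (one baseline PG update)
        evaluate_return -- function mapping params to one episodic return (one rollout)
        n_eval          -- rollouts used to estimate each policy's mean return
        z               -- z-score for a one-sided ~95% confidence bound
        """
        old_params = policy.get_params()
        cand_params = grad_step(old_params)  # candidate from any baseline PG method

        # Estimate returns for the current and candidate policies.
        old_returns = np.array([evaluate_return(old_params) for _ in range(n_eval)])
        new_returns = np.array([evaluate_return(cand_params) for _ in range(n_eval)])

        # Lower confidence bound on the candidate's mean return.
        lcb = new_returns.mean() - z * new_returns.std(ddof=1) / np.sqrt(n_eval)

        # Accept the update only when confident it does not degrade performance.
        if lcb >= old_returns.mean():
            policy.set_params(cand_params)
            return True
        return False  # reject: keep the current policy

Under this (assumed) gating rule, updates whose estimated improvement is not statistically distinguishable from noise are rejected, which is one way a profiling layer could trade a little per-step computation for the stable, near-monotonic improvement the abstract claims.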

Published

2026-03-14

How to Cite

Ahmed, S., Bergou, E. H., Wang, Y., & Dutta, A. (2026). Stabilizing Policy Gradient Methods via Reward Profiling. Proceedings of the AAAI Conference on Artificial Intelligence, 40(24), 19560-19568. https://doi.org/10.1609/aaai.v40i24.39035

Section

AAAI Technical Track on Machine Learning I