Convergence and Optimality of Policy Gradient Methods in Weakly Smooth Settings

Authors

  • Matthew S. Zhang, University of Toronto, Vector Institute
  • Murat A. Erdogdu, University of Toronto, Vector Institute
  • Animesh Garg, University of Toronto, Vector Institute

DOI:

https://doi.org/10.1609/aaai.v36i8.20891

Keywords:

Machine Learning (ML)

Abstract

Policy gradient methods have been frequently applied to problems in control and reinforcement learning with great success, yet existing convergence analyses still rely on non-intuitive, impractical, and often opaque conditions. In particular, existing rates are achieved only in limited settings, under strict regularity conditions. In this work, we establish explicit convergence rates of policy gradient methods, extending the convergence regime to weakly smooth policy classes with L2-integrable gradients. We provide intuitive examples to illustrate the insight behind these new conditions. Notably, our analysis also shows that convergence rates are achievable for both the standard policy gradient and the natural policy gradient algorithms under these assumptions. Lastly, we provide performance guarantees for the converged policies.
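For context, the two algorithms named in the abstract are conventionally written as the gradient-ascent and Fisher-preconditioned updates below. This is a minimal sketch in standard notation (J for the expected return, \pi_\theta for the parameterized policy, d^{\pi_\theta} for the state visitation distribution, Q^{\pi_\theta} for the action-value function, F(\theta) for the Fisher information matrix, \eta for the step size); the notation is assumed here for illustration and is not reproduced from the paper, nor does it capture the paper's weakly smooth conditions.

% Policy gradient theorem (standard form; notation assumed for illustration)
\nabla_\theta J(\theta) = \mathbb{E}_{s \sim d^{\pi_\theta},\, a \sim \pi_\theta(\cdot \mid s)}\!\left[ \nabla_\theta \log \pi_\theta(a \mid s)\, Q^{\pi_\theta}(s, a) \right]

% Standard policy gradient update with step size \eta
\theta_{t+1} = \theta_t + \eta\, \nabla_\theta J(\theta_t)

% Natural policy gradient update, preconditioned by the Fisher information F(\theta)
\theta_{t+1} = \theta_t + \eta\, F(\theta_t)^{-1} \nabla_\theta J(\theta_t)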

Published

2022-06-28

How to Cite

Zhang, M. S., Erdogdu, M. A., & Garg, A. (2022). Convergence and Optimality of Policy Gradient Methods in Weakly Smooth Settings. Proceedings of the AAAI Conference on Artificial Intelligence, 36(8), 9066-9073. https://doi.org/10.1609/aaai.v36i8.20891

Section

AAAI Technical Track on Machine Learning III