Regret Analysis of Policy Gradient Algorithm for Infinite Horizon Average Reward Markov Decision Processes

Authors

  • Qinbo Bai, Purdue University
  • Washim Uddin Mondal, Purdue University
  • Vaneet Aggarwal, Purdue University

DOI:

https://doi.org/10.1609/aaai.v38i10.28973

Keywords:

ML: Reinforcement Learning, ML: Learning Theory

Abstract

In this paper, we consider an infinite-horizon average-reward Markov Decision Process (MDP). Unlike existing work in this setting, our approach uses a general policy gradient-based algorithm and does not assume a linear MDP structure. We propose a vanilla policy gradient-based algorithm and prove its global convergence. We then show that the proposed algorithm achieves O(T^{3/4}) regret. To the best of our knowledge, this is the first regret bound for a general parameterized policy gradient algorithm in the average-reward setting.
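To make the setting concrete, below is a minimal, illustrative sketch of a vanilla policy gradient update for an infinite-horizon average-reward MDP, where the gradient estimate is built from differential returns (rewards centered by the estimated average reward). The toy environment, tabular softmax parameterization, and hyperparameters are assumptions for illustration only; this is not the paper's exact algorithm or the construction used in its regret analysis.

```python
import numpy as np

# Illustrative vanilla policy gradient for an infinite-horizon
# average-reward MDP. Environment and hyperparameters are assumptions.

rng = np.random.default_rng(0)

n_states, n_actions = 4, 2
# Random ergodic MDP: P[s, a] is a distribution over next states,
# r[s, a] is the reward for taking action a in state s.
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
r = rng.uniform(0.0, 1.0, size=(n_states, n_actions))

def policy(theta, s):
    """Softmax policy over actions in state s."""
    logits = theta[s] - theta[s].max()
    p = np.exp(logits)
    return p / p.sum()

def grad_log_pi(theta, s, a):
    """Gradient of log pi(a|s) w.r.t. theta for a tabular softmax policy."""
    g = np.zeros_like(theta)
    g[s] = -policy(theta, s)
    g[s, a] += 1.0
    return g

theta = np.zeros((n_states, n_actions))
alpha = 0.05      # step size
horizon = 200     # rollout length per policy update
n_updates = 500

s = 0
for _ in range(n_updates):
    # Roll out the current policy and record the trajectory.
    traj = []
    for _ in range(horizon):
        a = rng.choice(n_actions, p=policy(theta, s))
        traj.append((s, a, r[s, a]))
        s = rng.choice(n_states, p=P[s, a])

    # Estimate the average reward J(theta) from the rollout.
    avg_reward = np.mean([rw for (_, _, rw) in traj])

    # REINFORCE-style estimate of the average-reward policy gradient:
    # each step is weighted by the differential return, i.e. the sum of
    # (r_t - J) from that step to the end of the rollout.
    grad = np.zeros_like(theta)
    diff_return = 0.0
    for (st, at, rw) in reversed(traj):
        diff_return += rw - avg_reward
        grad += grad_log_pi(theta, st, at) * diff_return
    grad /= len(traj)

    theta += alpha * grad  # gradient ascent on J(theta)

print("estimated average reward:", avg_reward)
```

Centering rewards by the estimated average reward is what distinguishes the average-reward gradient estimate from its discounted counterpart: without discounting, raw returns diverge over an infinite horizon, while the differential returns remain well behaved for an ergodic MDP.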

Published

2024-03-24

How to Cite

Bai, Q., Mondal, W. U., & Aggarwal, V. (2024). Regret Analysis of Policy Gradient Algorithm for Infinite Horizon Average Reward Markov Decision Processes. Proceedings of the AAAI Conference on Artificial Intelligence, 38(10), 10980-10988. https://doi.org/10.1609/aaai.v38i10.28973

Issue

Vol. 38 No. 10 (2024)
Section

AAAI Technical Track on Machine Learning I