Sample Efficient Reinforcement Learning with REINFORCE

Authors

  • Junzi Zhang, Institute for Computational & Mathematical Engineering, Stanford University, USA
  • Jongho Kim, Department of Electrical Engineering, Stanford University, USA
  • Brendan O'Donoghue, DeepMind, Google
  • Stephen Boyd, Department of Electrical Engineering, Stanford University, USA

DOI:

https://doi.org/10.1609/aaai.v35i12.17300

Keywords:

Reinforcement Learning, Optimization, Planning under Uncertainty, Planning with Markov Models (MDPs, POMDPs)

Abstract

Policy gradient methods are among the most effective methods for large-scale reinforcement learning, and their empirical success has prompted several works that develop the foundation of their global convergence theory. However, prior works have required either exact gradients or mini-batch stochastic gradients based on the state-action visitation measure with a diverging batch size, which limits their applicability in practical scenarios. In this paper, we consider classical policy gradient methods that compute an approximate gradient with a single trajectory or a fixed-size mini-batch of trajectories under soft-max parametrization and log-barrier regularization, along with the widely used REINFORCE gradient estimation procedure. By controlling the number of "bad" episodes and resorting to the classical doubling trick, we establish an anytime sub-linear high-probability regret bound as well as almost sure global convergence of the average regret with an asymptotically sub-linear rate. These provide the first set of global convergence and sample efficiency results for the well-known REINFORCE algorithm and contribute to a better understanding of its performance in practice.
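To make the setting concrete, below is a minimal sketch of the single-trajectory REINFORCE gradient estimator under a tabular soft-max parametrization with a log-barrier regularizer, as described in the abstract. The environment interface (`env_reset`, `env_step`), the barrier coefficient name, and the tabular logits layout are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a vector of action logits."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def reinforce_gradient(theta, env_reset, env_step, horizon, gamma=0.99, barrier_coef=0.01):
    """Single-trajectory REINFORCE gradient for a tabular softmax policy.

    theta:       (num_states, num_actions) logits of the softmax policy (assumed layout).
    env_reset(): returns an initial state index (hypothetical interface).
    env_step(s, a): returns (next_state, reward) (hypothetical interface).
    A log-barrier term barrier_coef * sum_{s,a} log pi_theta(a|s) is added to keep the
    policy away from the boundary of the simplex, as in log-barrier regularization.
    """
    num_states, num_actions = theta.shape
    grad = np.zeros_like(theta)

    # Roll out a single episode with the current softmax policy.
    states, actions, rewards = [], [], []
    s = env_reset()
    for _ in range(horizon):
        probs = softmax(theta[s])
        a = np.random.choice(num_actions, p=probs)
        s_next, r = env_step(s, a)
        states.append(s); actions.append(a); rewards.append(r)
        s = s_next

    # REINFORCE estimate: grad log pi(a_t|s_t) weighted by discounted reward-to-go.
    for t, (s_t, a_t) in enumerate(zip(states, actions)):
        g_t = sum(gamma ** (k - t) * rewards[k] for k in range(t, len(rewards)))
        probs = softmax(theta[s_t])
        grad_log = -probs
        grad_log[a_t] += 1.0  # d/d theta[s_t, :] of log pi(a_t|s_t) = e_{a_t} - pi(.|s_t)
        grad[s_t] += (gamma ** t) * g_t * grad_log

    # Gradient of the log-barrier regularizer, summed uniformly over state-action pairs:
    # d/d theta[s, a'] of sum_a log pi(a|s) = 1 - num_actions * pi(a'|s).
    for s_idx in range(num_states):
        probs = softmax(theta[s_idx])
        grad[s_idx] += barrier_coef * (1.0 - num_actions * probs)

    return grad
```

A gradient-ascent update would then simply be `theta += step_size * reinforce_gradient(...)`, repeated over episodes; the paper's analysis concerns the regret and convergence of such single-trajectory (or fixed mini-batch) updates.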

Published

2021-05-18

How to Cite

Zhang, J., Kim, J., O’Donoghue, B., & Boyd, S. (2021). Sample Efficient Reinforcement Learning with REINFORCE. Proceedings of the AAAI Conference on Artificial Intelligence, 35(12), 10887-10895. https://doi.org/10.1609/aaai.v35i12.17300

Section

AAAI Technical Track on Machine Learning V