Sample Efficient Reinforcement Learning with REINFORCE
Keywords: Reinforcement Learning, Optimization, Planning under Uncertainty, Planning with Markov Models (MDPs, POMDPs)
Abstract
Policy gradient methods are among the most effective methods for large-scale reinforcement learning, and their empirical success has prompted several works that develop the foundation of their global convergence theory. However, prior works have either required exact gradients or state-action visitation measure based mini-batch stochastic gradients with a diverging batch size, which limits their applicability in practical scenarios. In this paper, we consider classical policy gradient methods that compute an approximate gradient with a single trajectory or a fixed-size mini-batch of trajectories under soft-max parametrization and log-barrier regularization, along with the widely-used REINFORCE gradient estimation procedure. By controlling the number of "bad" episodes and resorting to the classical doubling trick, we establish an anytime sub-linear high probability regret bound as well as almost sure global convergence of the average regret with an asymptotically sub-linear rate. These provide the first set of global convergence and sample efficiency results for the well-known REINFORCE algorithm and contribute to a better understanding of its performance in practice.
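The abstract refers to single-trajectory REINFORCE gradient estimation under soft-max parametrization with log-barrier regularization. The following is a minimal sketch of that estimator on a hypothetical two-armed bandit (a one-step MDP); the arm means, learning rate, and regularization weight are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
ARM_MEANS = np.array([0.2, 0.8])  # hypothetical expected reward of each arm


def softmax(theta):
    """Soft-max parametrization: pi(a) proportional to exp(theta_a)."""
    z = np.exp(theta - theta.max())
    return z / z.sum()


def reinforce_step(theta, lr=0.1, reg=0.01):
    """One single-trajectory REINFORCE update.

    The gradient estimate is r * grad_theta log pi(a) from a single sampled
    action, plus the gradient of a log-barrier regularizer
    (reg / A) * sum_a log pi(a), which keeps every action probability
    bounded away from zero.
    """
    pi = softmax(theta)
    a = rng.choice(len(pi), p=pi)          # sample one action (trajectory)
    r = rng.normal(ARM_MEANS[a], 0.1)      # observed stochastic reward
    grad_log = -pi.copy()
    grad_log[a] += 1.0                     # grad_theta log pi(a) = e_a - pi
    barrier_grad = reg * (1.0 / len(pi) - pi)  # grad of the log-barrier term
    return theta + lr * (r * grad_log + barrier_grad)


theta = np.zeros(2)
for _ in range(2000):
    theta = reinforce_step(theta)
# The policy should concentrate most of its mass on the better arm.
print(softmax(theta))
```

Each update uses only one sampled action and its reward, matching the fixed-batch (here, batch size one) regime the abstract contrasts with diverging-batch analyses; the barrier term prevents the soft-max policy from collapsing prematurely onto a single action.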
How to Cite
Zhang, J., Kim, J., O’Donoghue, B., & Boyd, S. (2021). Sample Efficient Reinforcement Learning with REINFORCE. Proceedings of the AAAI Conference on Artificial Intelligence, 35(12), 10887-10895. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17300
AAAI Technical Track on Machine Learning V