Gradient Temporal Difference with Momentum: Stability and Convergence

Authors

  • Rohan Deb Indian Institute of Science, Bangalore
  • Shalabh Bhatnagar Indian Institute of Science, Bangalore

DOI:

https://doi.org/10.1609/aaai.v36i6.20601

Keywords:

Machine Learning (ML), Reasoning Under Uncertainty (RU), Search And Optimization (SO)

Abstract

Gradient temporal difference (Gradient TD) algorithms are a popular class of stochastic approximation (SA) algorithms used for policy evaluation in reinforcement learning. Here, we consider Gradient TD algorithms with an additional heavy ball momentum term and provide choices of step size and momentum parameter that ensure almost sure asymptotic convergence of these algorithms. In doing so, we decompose the heavy ball Gradient TD iterates into three separate iterates with different step sizes. We first analyze these iterates in the one-timescale SA setting using results from the existing literature. However, the one-timescale case is restrictive, and a more general analysis can be obtained by viewing the iterates through a three-timescale decomposition. In the process, we provide the first conditions for stability and convergence of general three-timescale SA. We then prove that the heavy ball Gradient TD algorithm is convergent using our three-timescale SA analysis. Finally, we evaluate these algorithms on standard RL problems and report improved performance over the vanilla algorithms.
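
For concreteness, the following is a minimal sketch of how a heavy ball momentum term attaches to a Gradient TD (TDC-style) update; the notation (step sizes \alpha_t, \beta_t, momentum parameter \eta_t, feature vectors \phi_t, TD error \delta_t) is standard and illustrative only, and the precise decomposition and parameter choices are those given in the paper, not the equations below.

\[ \delta_t = r_{t+1} + \gamma\,\theta_t^{\top}\phi_{t+1} - \theta_t^{\top}\phi_t \]
\[ \theta_{t+1} = \theta_t + \alpha_t\big(\delta_t\,\phi_t - \gamma\,\phi_{t+1}\,(\phi_t^{\top} w_t)\big) + \eta_t\,(\theta_t - \theta_{t-1}) \]
\[ w_{t+1} = w_t + \beta_t\big(\delta_t - \phi_t^{\top} w_t\big)\,\phi_t \]

Rewriting the momentum term through an auxiliary iterate m_t = \theta_t - \theta_{t-1} splits the \theta-recursion into two coupled recursions, which together with the w-recursion gives three coupled iterates driven by \alpha_t, \beta_t and \eta_t, the kind of decomposition the abstract refers to.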

Published

2022-06-28

How to Cite

Deb, R., & Bhatnagar, S. (2022). Gradient Temporal Difference with Momentum: Stability and Convergence. Proceedings of the AAAI Conference on Artificial Intelligence, 36(6), 6488-6496. https://doi.org/10.1609/aaai.v36i6.20601

Issue

Vol. 36 No. 6 (2022)

Section

AAAI Technical Track on Machine Learning I