TY - JOUR
AU - Dalal, Gal
AU - Szorenyi, Balazs
AU - Thoppe, Gugan
PY - 2020/04/03
Y2 - 2024/02/24
TI - A Tale of Two-Timescale Reinforcement Learning with the Tightest Finite-Time Bound
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 34
IS - 04
SE - AAAI Technical Track: Machine Learning
DO - 10.1609/aaai.v34i04.5779
UR - https://ojs.aaai.org/index.php/AAAI/article/view/5779
SP  - 3701
EP  - 3708
AB - <p>Policy evaluation in reinforcement learning is often conducted using two-timescale stochastic approximation, which results in various gradient temporal difference methods such as GTD(0), GTD2, and TDC. Here, we provide convergence rate bounds for this suite of algorithms. Algorithms such as these have two iterates, <em>θ</em><sub><em>n</em></sub> and <em>w</em><sub><em>n</em></sub>, which are updated using two distinct stepsize sequences, <em>α</em><sub><em>n</em></sub> and <em>β</em><sub><em>n</em></sub>, respectively. Assuming <em>α</em><sub><em>n</em></sub> = <em>n</em><sup>−<em>α</em></sup> and <em>β</em><sub><em>n</em></sub> = <em>n</em><sup>−<em>β</em></sup> with 1 > <em>α</em> > <em>β</em> > 0, we show that, with high probability, the two iterates converge to their respective solutions <em>θ</em><sup>*</sup> and <em>w</em><sup>*</sup> at rates given by ∥<em>θ</em><sub><em>n</em></sub> - <em>θ</em><sup>*</sup>∥ = <em>Õ</em>(<em>n</em><sup>−<em>α</em>/2</sup>) and ∥<em>w</em><sub><em>n</em></sub> - <em>w</em><sup>*</sup>∥ = <em>Õ</em>(<em>n</em><sup>−<em>β</em>/2</sup>); here, <em>Õ</em> hides logarithmic terms. Via comparable lower bounds, we show that these bounds are, in fact, tight. To the best of our knowledge, ours is the first finite-time analysis which achieves these rates. While it was known that the two timescale components decouple asymptotically, our results depict this phenomenon more explicitly by showing that it in fact happens from some finite time onwards. Lastly, compared to existing works, our result applies to a broader family of stepsizes, including non-square summable ones.</p>
ER -