Improved Worst-Case Regret Bounds for Randomized Least-Squares Value Iteration

Authors

  • Priyank Agrawal, University of Illinois at Urbana-Champaign
  • Jinglin Chen, University of Illinois at Urbana-Champaign
  • Nan Jiang, University of Illinois at Urbana-Champaign

Keywords

Reinforcement Learning, Online Learning & Bandits

Abstract

This paper studies regret minimization with randomized value functions in reinforcement learning. For tabular, finite-horizon Markov Decision Processes, we introduce a clipped variant of a classical Thompson Sampling (TS)-like algorithm, randomized least-squares value iteration (RLSVI). Our $\tilde{\mathrm{O}}(H^2S\sqrt{AT})$ high-probability worst-case regret bound improves the sharpest previously known worst-case regret bound for RLSVI and matches the state-of-the-art worst-case regret bounds for TS-based algorithms.
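To make the abstract concrete, below is a minimal, hypothetical sketch of tabular RLSVI with clipping on a toy finite-horizon MDP. The toy MDP, the noise scale `sigma`, and the clipping range $[0, H-h]$ are illustrative assumptions for exposition, not the paper's exact construction or tuning:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy MDP (sizes chosen for illustration only).
S, A, H = 5, 2, 4
P = rng.dirichlet(np.ones(S), size=(S, A))  # true transitions P[s, a] -> dist over next states
R = rng.uniform(size=(S, A))                # true mean rewards in [0, 1]

def rlsvi_plan(counts, r_sum, p_counts, sigma=1.0):
    """One planning pass of tabular RLSVI with clipping (a sketch, not the paper's exact algorithm).

    Builds the empirical model, perturbs each (s, a) estimate with Gaussian noise
    whose scale shrinks with visit counts, then runs backward value iteration,
    clipping each Q_h into [0, H - h]."""
    n = np.maximum(counts, 1)
    R_hat = r_sum / n
    P_hat = p_counts / n[..., None]
    Q = np.zeros((H + 1, S, A))
    for h in range(H - 1, -1, -1):
        noise = (sigma / np.sqrt(n)) * rng.standard_normal((S, A))
        V_next = Q[h + 1].max(axis=1)              # greedy value at step h + 1
        Q[h] = np.clip(R_hat + noise + P_hat @ V_next, 0.0, H - h)  # clipping step
    return Q

# Interact for a few episodes, refitting the randomized value functions each time.
counts = np.zeros((S, A))
r_sum = np.zeros((S, A))
p_counts = np.zeros((S, A, S))
total = 0.0
for ep in range(200):
    Q = rlsvi_plan(counts, r_sum, p_counts)
    s = 0
    for h in range(H):
        a = int(Q[h, s].argmax())                  # act greedily w.r.t. the sampled Q
        r = R[s, a]                                # mean reward, for simplicity
        s_next = rng.choice(S, p=P[s, a])
        counts[s, a] += 1
        r_sum[s, a] += r
        p_counts[s, a, s_next] += 1
        total += r
        s = s_next
print(round(total, 2))
```

The exploration here comes entirely from the Gaussian perturbations rather than explicit bonuses; clipping keeps the randomized Q-values inside the range any true value function can take, which is the mechanism the paper's analysis exploits.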

Published

2021-05-18

How to Cite

Agrawal, P., Chen, J., & Jiang, N. (2021). Improved Worst-Case Regret Bounds for Randomized Least-Squares Value Iteration. Proceedings of the AAAI Conference on Artificial Intelligence, 35(8), 6566-6573. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16813

Section

AAAI Technical Track on Machine Learning I