Sketched Newton Value Iteration for Large-Scale Markov Decision Processes

Authors

  • Jinsong Liu, Shanghai University of Finance and Economics
  • Chenghan Xie, Fudan University
  • Qi Deng, Shanghai University of Finance and Economics
  • Dongdong Ge, Shanghai University of Finance and Economics
  • Yinyu Ye, Stanford University

DOI:

https://doi.org/10.1609/aaai.v38i12.29301

Keywords:

ML: Optimization, ML: Reinforcement Learning

Abstract

Value Iteration (VI) is one of the most classical algorithms for solving Markov Decision Processes (MDPs) and lays the foundation for more advanced reinforcement learning algorithms such as Q-learning. However, as a first-order method, VI may require a large number of iterations to converge. In this paper, we introduce the Newton Value Iteration (NVI) algorithm, which, unlike some previous second-order methods, eliminates the dependence on the action-space dimension. Consequently, NVI can efficiently handle MDPs with large action spaces. Building on NVI, we propose a novel approach called Sketched Newton Value Iteration (SNVI) to tackle MDPs with both large state and action spaces. SNVI not only inherits the stability and fast convergence of second-order algorithms, but also significantly reduces computational complexity, making it highly scalable. Extensive experiments demonstrate the superiority of our algorithms over traditional VI and previously proposed second-order VI algorithms.
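For context, the first-order baseline the abstract refers to is plain tabular Value Iteration, which repeatedly applies the Bellman optimality operator until the value function stops changing. The sketch below shows that baseline on a small synthetic MDP; it is not the paper's NVI or SNVI algorithm, and the transition tensor, rewards, and discount factor are illustrative assumptions.

```python
import numpy as np

def value_iteration(P, r, gamma=0.9, tol=1e-8, max_iter=10_000):
    """Plain Value Iteration on a tabular MDP.

    P: (A, S, S) tensor, P[a, s, t] = Pr(next state t | state s, action a)
    r: (S, A) reward matrix
    Returns an approximation of the optimal value function v*.
    """
    S = r.shape[0]
    v = np.zeros(S)
    for _ in range(max_iter):
        # Bellman optimality operator: q[s, a] = r[s, a] + gamma * sum_t P[a, s, t] v[t]
        q = r + gamma * np.einsum("ast,t->sa", P, v)
        v_new = q.max(axis=1)
        # Stop once successive iterates are close (geometric convergence at rate gamma)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
    return v

# Synthetic random MDP with 5 states and 3 actions (illustrative only)
rng = np.random.default_rng(0)
S, A = 5, 3
P = rng.random((A, S, S))
P /= P.sum(axis=2, keepdims=True)  # row-normalize into transition probabilities
r = rng.random((S, A))
v_star = value_iteration(P, r)
```

Because the Bellman operator is a gamma-contraction, the iteration count grows roughly like log(1/epsilon) / log(1/gamma), which is the slow first-order behavior the paper's second-order methods are designed to improve on.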

Published

2024-03-24

How to Cite

Liu, J., Xie, C., Deng, Q., Ge, D., & Ye, Y. (2024). Sketched Newton Value Iteration for Large-Scale Markov Decision Processes. Proceedings of the AAAI Conference on Artificial Intelligence, 38(12), 13936–13944. https://doi.org/10.1609/aaai.v38i12.29301

Section

AAAI Technical Track on Machine Learning III