Scalable First-Order Methods for Robust MDPs
DOI:
https://doi.org/10.1609/aaai.v35i13.17435
Keywords:
Sequential Decision Making
Abstract
Robust Markov Decision Processes (MDPs) are a powerful framework for modeling sequential decision-making problems with model uncertainty. This paper proposes the first first-order framework for solving robust MDPs. Our algorithm interleaves primal-dual first-order updates with approximate Value Iteration updates. By carefully controlling the tradeoff between the accuracy and cost of Value Iteration updates, we achieve an ergodic convergence rate that is significantly better than classical Value Iteration algorithms in terms of the number of states S and the number of actions A on ellipsoidal and Kullback-Leibler s-rectangular uncertainty sets. In numerical experiments on ellipsoidal uncertainty sets, we show that our algorithm is significantly more scalable than state-of-the-art approaches. Our framework is also the first one to solve robust MDPs with s-rectangular KL uncertainty sets.
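To illustrate the general idea of interleaving an outer Value Iteration loop with an inner first-order solver, here is a minimal sketch. It assumes an (s,a)-rectangular uncertainty set given by a Euclidean ball of radius `radius` around a nominal transition kernel, and solves the inner adversarial problem by projected gradient descent with alternating projections; this is only an illustration of the interleaving idea, not the paper's primal-dual algorithm or its s-rectangular sets.

```python
# Minimal sketch (illustrative only): robust value iteration with a
# first-order inner solver on an assumed (s,a)-rectangular ball-shaped
# uncertainty set around a nominal kernel. Names and parameters here
# (radius, lr, step counts) are hypothetical choices, not the paper's.
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    n = v.size
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, n + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def adversarial_value(p_nominal, value, radius, steps=50, lr=0.1):
    """Approximately minimize <p, value> over distributions p within a
    Euclidean ball of the given radius around p_nominal, using projected
    gradient descent with alternating projections (simplex, then ball)."""
    p = p_nominal.copy()
    for _ in range(steps):
        p = p - lr * value                 # gradient step on the linear objective
        p = project_to_simplex(p)          # keep p a probability distribution
        d = p - p_nominal
        norm = np.linalg.norm(d)
        if norm > radius:                  # pull back toward the nominal kernel
            p = project_to_simplex(p_nominal + d * (radius / norm))
    return float(p @ value)

def robust_value_iteration(P, R, radius, gamma=0.9, outer_iters=200):
    """Outer loop: approximate robust Bellman updates.
    Inner loop: first-order solver for the worst-case transition."""
    S, A, _ = P.shape
    V = np.zeros(S)
    for _ in range(outer_iters):
        Q = np.empty((S, A))
        for s in range(S):
            for a in range(A):
                Q[s, a] = R[s, a] + gamma * adversarial_value(P[s, a], V, radius)
        V = Q.max(axis=1)
    return V

# Usage on a tiny random MDP
rng = np.random.default_rng(0)
S, A = 5, 3
P = rng.random((S, A, S))
P /= P.sum(axis=-1, keepdims=True)   # nominal transition kernel
R = rng.random((S, A))
V_robust = robust_value_iteration(P, R, radius=0.1)
print(V_robust)
```

The key tradeoff the paper analyzes appears here as the number of inner first-order steps spent per Bellman update: cheaper, less accurate inner solves reduce per-iteration cost but slow the outer convergence.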
Published
2021-05-18
How to Cite
Grand-Clément, J., & Kroer, C. (2021). Scalable First-Order Methods for Robust MDPs. Proceedings of the AAAI Conference on Artificial Intelligence, 35(13), 12086-12094. https://doi.org/10.1609/aaai.v35i13.17435
Issue
Vol. 35 No. 13 (2021)
Section
AAAI Technical Track on Reasoning under Uncertainty