Scalable First-Order Methods for Robust MDPs

Authors

  • Julien Grand-Clément IEOR Department, Columbia University
  • Christian Kroer IEOR Department, Columbia University

Keywords

Sequential Decision Making

Abstract

Robust Markov Decision Processes (MDPs) are a powerful framework for modeling sequential decision-making problems with model uncertainty. This paper proposes the first first-order framework for solving robust MDPs. Our algorithm interleaves primal-dual first-order updates with approximate Value Iteration updates. By carefully controlling the tradeoff between the accuracy and cost of Value Iteration updates, we achieve an ergodic convergence rate that is significantly better than classical Value Iteration algorithms in terms of the number of states S and the number of actions A on ellipsoidal and Kullback-Leibler (KL) s-rectangular uncertainty sets. In numerical experiments on ellipsoidal uncertainty sets, we show that our algorithm is significantly more scalable than state-of-the-art approaches. Our framework is also the first one to solve robust MDPs with s-rectangular KL uncertainty sets.
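To make the robust MDP setting concrete, here is a minimal, hedged sketch of robust Value Iteration, the classical baseline the abstract compares against. This is not the paper's primal-dual algorithm: for illustration only, the uncertainty set is simplified to a finite collection of candidate transition kernels (the names `P_candidates`, `R`, and `robust_value_iteration` are ours, not the paper's), and the robust Bellman update takes the worst-case kernel per state-action pair before maximizing over actions.

```python
import numpy as np

def robust_value_iteration(P_candidates, R, gamma=0.9, tol=1e-8, max_iter=10_000):
    """Illustrative robust Value Iteration (not the paper's algorithm).

    P_candidates: (K, S, A, S) array of K candidate transition kernels,
                  a finite stand-in for a rectangular uncertainty set.
    R:            (S, A) reward matrix.
    Returns the robust value function v of shape (S,).
    """
    K, S, A, _ = P_candidates.shape
    v = np.zeros(S)
    for _ in range(max_iter):
        # Q[k, s, a] = expected return at (s, a) under candidate kernel k
        Q = R[None, :, :] + gamma * (P_candidates @ v)   # shape (K, S, A)
        # Robust Bellman update: worst kernel per (s, a), then best action
        v_new = Q.min(axis=0).max(axis=1)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
    return v
```

With a single candidate kernel this reduces to standard Value Iteration; each sweep costs O(K·S²·A) here, which is the per-iteration cost the paper's first-order updates aim to improve upon for richer (ellipsoidal, KL) uncertainty sets.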

Published

2021-05-18

How to Cite

Grand-Clément, J., & Kroer, C. (2021). Scalable First-Order Methods for Robust MDPs. Proceedings of the AAAI Conference on Artificial Intelligence, 35(13), 12086-12094. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17435

Section

AAAI Technical Track on Reasoning under Uncertainty