Solving Uncertain MDPs by Reusing State Information and Plans

Authors

  • Ping Hou, New Mexico State University
  • William Yeoh, New Mexico State University
  • Tran Cao Son, New Mexico State University

DOI:

https://doi.org/10.1609/aaai.v28i1.9029

Abstract

While MDPs are powerful tools for modeling sequential decision-making problems under uncertainty, they are sensitive to the accuracy of their parameters. MDPs with uncertainty in their parameters are called Uncertain MDPs. In this paper, we introduce a general framework that allows off-the-shelf MDP algorithms to solve Uncertain MDPs by planning based on currently available information and replanning if and when the problem changes. We demonstrate the generality of this approach by showing that it can use the VI, TVI, ILAO*, LRTDP, and UCT algorithms to solve Uncertain MDPs. We experimentally show that our approach is typically faster than replanning from scratch, and we also provide a way to estimate the amount of speedup based on the amount of information being reused.
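To illustrate the general idea of reusing information across replanning episodes, the sketch below shows plain value iteration warm-started with the value function from a previous solve after the transition parameters change. This is only a minimal illustration, not the paper's framework or implementation; all function and variable names are hypothetical, and the random MDP is assumed for demonstration purposes.

```python
# Illustrative sketch (not the paper's method): reuse previous state values
# to warm-start value iteration after the MDP's parameters change.
import numpy as np


def value_iteration(P, R, gamma=0.95, eps=1e-6, V_init=None):
    """P: transitions with shape (A, S, S); R: rewards with shape (A, S).
    V_init: optional value function from a previous solve (information reuse)."""
    _, n_states, _ = P.shape
    V = np.zeros(n_states) if V_init is None else V_init.copy()
    iters = 0
    while True:
        # Q[a, s] = R[a, s] + gamma * sum_s' P[a, s, s'] * V[s']
        Q = R + gamma * (P @ V)
        V_new = Q.max(axis=0)
        iters += 1
        if np.max(np.abs(V_new - V)) < eps:
            return V_new, Q.argmax(axis=0), iters
        V = V_new


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A, S = 2, 50
    P = rng.dirichlet(np.ones(S), size=(A, S))   # random transition model
    R = rng.random((A, S))                       # random rewards

    # Plan with the currently available parameters.
    V0, _, _ = value_iteration(P, R)

    # The problem "changes": the transition model is perturbed slightly.
    P2 = 0.9 * P + 0.1 * rng.dirichlet(np.ones(S), size=(A, S))

    _, _, iters_scratch = value_iteration(P2, R)               # replan from scratch
    _, _, iters_reuse = value_iteration(P2, R, V_init=V0)      # reuse old values
    print(f"iterations from scratch: {iters_scratch}, with reuse: {iters_reuse}")
```

When the new parameters are close to the old ones, the previous value function is near the new fixed point, so the warm-started solve converges in fewer iterations; the paper's estimate of speedup as a function of the amount of reused information follows the same intuition.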

Published

2014-06-21

How to Cite

Hou, P., Yeoh, W., & Son, T. C. (2014). Solving Uncertain MDPs by Reusing State Information and Plans. Proceedings of the AAAI Conference on Artificial Intelligence, 28(1). https://doi.org/10.1609/aaai.v28i1.9029