Minimax Regret Optimisation for Robust Planning in Uncertain Markov Decision Processes

Authors

  • Marc Rigter, Oxford Robotics Institute, University of Oxford, United Kingdom
  • Bruno Lacerda, Oxford Robotics Institute, University of Oxford, United Kingdom
  • Nick Hawes, Oxford Robotics Institute, University of Oxford, United Kingdom

Keywords

Planning with Markov Models (MDPs, POMDPs), Sequential Decision Making, Planning under Uncertainty, Reinforcement Learning

Abstract

The parameters for a Markov Decision Process (MDP) often cannot be specified exactly. Uncertain MDPs (UMDPs) capture this model ambiguity by defining sets to which the parameters belong. Minimax regret has been proposed as an objective for planning in UMDPs to find robust policies which are not overly conservative. In this work, we focus on planning for Stochastic Shortest Path (SSP) UMDPs with uncertain cost and transition functions. We introduce a Bellman equation to compute the regret for a policy. We propose a dynamic programming algorithm that utilises the regret Bellman equation, and show that it optimises minimax regret exactly for UMDPs with independent uncertainties. For coupled uncertainties, we extend our approach to use options to enable a trade-off between computation and solution quality. We evaluate our approach on both synthetic and real-world domains, showing that it significantly outperforms existing baselines.
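To make the minimax-regret objective concrete, the sketch below evaluates it by brute-force scenario enumeration on an invented toy instance: a small set of candidate models and candidate policies with assumed expected costs. This is only an illustration of the objective itself, not the paper's regret Bellman equation or dynamic programming algorithm.

```python
# Toy illustration of minimax regret for an uncertain MDP.
# The policies, models, and costs are hypothetical values chosen
# for illustration; real UMDPs have continuous parameter sets.

# Expected total cost of each candidate policy under each possible
# model instantiation (rows: policies, columns: models).
costs = {
    "safe":   {"m1": 10.0, "m2": 10.0},
    "risky":  {"m1": 4.0,  "m2": 18.0},
    "medium": {"m1": 7.0,  "m2": 12.0},
}
models = ["m1", "m2"]

# Best achievable cost in each model, i.e. with hindsight knowledge
# of which model is the true one.
best = {m: min(c[m] for c in costs.values()) for m in models}

# Regret of a policy in a model: its cost minus the best cost there.
regret = {pi: {m: costs[pi][m] - best[m] for m in models}
          for pi in costs}

# Worst-case (maximum) regret per policy; the minimax-regret policy
# minimises this quantity over the model set.
max_regret = {pi: max(regret[pi].values()) for pi in costs}
minimax_policy = min(max_regret, key=max_regret.get)

print(minimax_policy, max_regret[minimax_policy])  # → medium 3.0
```

Note how this differs from the classical maximin (worst-case cost) objective, which would select "safe" here (worst-case cost 10 versus 12 and 18): minimax regret instead selects "medium", a policy that is near-optimal in every model, reflecting the abstract's point that regret-based robustness is less conservative.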

Published

2021-05-18

How to Cite

Rigter, M., Lacerda, B., & Hawes, N. (2021). Minimax Regret Optimisation for Robust Planning in Uncertain Markov Decision Processes. Proceedings of the AAAI Conference on Artificial Intelligence, 35(13), 11930-11938. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17417

Section

AAAI Technical Track on Planning, Routing, and Scheduling