Planning for Risk-Aversion and Expected Value in MDPs

Authors

  • Marc Rigter Oxford Robotics Institute, University of Oxford
  • Paul Duckworth Oxford Robotics Institute, University of Oxford
  • Bruno Lacerda Oxford Robotics Institute, University of Oxford
  • Nick Hawes Oxford Robotics Institute, University of Oxford

Keywords

Markov Decision Processes, Risk, Conditional Value At Risk, Multi-objective Planning

Abstract

Planning in Markov decision processes (MDPs) typically optimises the expected cost. However, optimising the expectation does not consider the risk that, for any given run of the MDP, the total cost received may be unacceptably high. An alternative approach is to find a policy which optimises a risk-averse objective such as conditional value at risk (CVaR). However, optimising the CVaR alone may result in poor performance in expectation. In this work, we begin by showing that there can be multiple policies which obtain the optimal CVaR. This motivates us to propose a lexicographic approach which minimises the expected cost subject to the constraint that the CVaR of the total cost is optimal. We present an algorithm for this problem and evaluate our approach on four domains. Our results demonstrate that our lexicographic approach improves the expected cost compared to the state-of-the-art algorithm, while achieving the optimal CVaR.
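To make the lexicographic objective concrete, here is a minimal empirical sketch (not the paper's algorithm, which plans over the MDP directly): given Monte Carlo cost samples for each candidate policy, we estimate CVaR as the mean of the worst α-fraction of outcomes, keep only policies whose CVaR is optimal, and break ties by expected cost. The function names, the sample-based estimator, and the numerical tolerance are all illustrative assumptions.

```python
from math import ceil

def cvar(costs, alpha):
    # Empirical CVaR_alpha of a cost sample: the mean of the worst
    # (highest-cost) alpha-fraction of outcomes. Costs are minimised,
    # so higher cost is worse.
    ordered = sorted(costs, reverse=True)
    k = max(1, ceil(alpha * len(ordered)))
    return sum(ordered[:k]) / k

def lexicographic_best(policy_costs, alpha, tol=1e-9):
    # Lexicographic criterion: among policies whose estimated CVaR is
    # (numerically) optimal, return the one with the lowest expected cost.
    cvars = {p: cvar(c, alpha) for p, c in policy_costs.items()}
    best = min(cvars.values())
    candidates = [p for p, v in cvars.items() if v <= best + tol]
    return min(candidates,
               key=lambda p: sum(policy_costs[p]) / len(policy_costs[p]))

# Policies "b" and "c" tie on CVaR_0.5; "c" wins on expected cost.
samples = {"a": [0, 10], "b": [5, 5], "c": [4, 5]}
print(lexicographic_best(samples, alpha=0.5))  # -> c
```

This sample-based view only mimics the objective; the paper's contribution is computing such a policy exactly via planning in the MDP.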

Published

2022-06-13

How to Cite

Rigter, M., Duckworth, P., Lacerda, B., & Hawes, N. (2022). Planning for Risk-Aversion and Expected Value in MDPs. Proceedings of the International Conference on Automated Planning and Scheduling, 32(1), 307-315. Retrieved from https://ojs.aaai.org/index.php/ICAPS/article/view/19814