Constrained Risk-Averse Markov Decision Processes

Authors

  • Mohamadreza Ahmadi, California Institute of Technology, 1200 E. California Blvd., Pasadena, CA 91125
  • Ugo Rosolia, California Institute of Technology, 1200 E. California Blvd., Pasadena, CA 91125
  • Michel D. Ingham, NASA Jet Propulsion Laboratory, 4800 Oak Grove Dr, Pasadena, CA 91109
  • Richard M. Murray, California Institute of Technology, 1200 E. California Blvd., Pasadena, CA 91125
  • Aaron D. Ames, California Institute of Technology, 1200 E. California Blvd., Pasadena, CA 91125

DOI

https://doi.org/10.1609/aaai.v35i13.17393

Keywords

Planning with Markov Models (MDPs, POMDPs)

Abstract

We consider the problem of designing policies for Markov decision processes (MDPs) with dynamic coherent risk objectives and constraints. We begin by formulating the problem in a Lagrangian framework. Under the assumption that the risk objectives and constraints can be represented by a Markov risk transition mapping, we propose an optimization-based method for synthesizing Markovian policies whose values lower-bound the optimal value of the constrained risk-averse problem. We demonstrate that the formulated optimization problems are in the form of difference convex programs (DCPs) and can be solved by the disciplined convex-concave programming (DCCP) framework. We show that these results generalize the linear programs used for constrained MDPs with total discounted expected costs and constraints. Finally, we illustrate the effectiveness of the proposed method with numerical experiments on a rover navigation problem involving the conditional value-at-risk (CVaR) and entropic value-at-risk (EVaR) coherent risk measures.
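Two minimal sketches can make the abstract's computational claims concrete. First, the DCCP solution workflow mentioned above, shown on a toy difference-of-convex problem (maximizing a convex norm over a box) using the open-source CVXPY and DCCP Python packages. This toy problem is only an illustration of the framework, not the paper's policy-synthesis program.

```python
import cvxpy as cp
import dccp  # importing dccp registers the 'dccp' solve method with cvxpy

x = cp.Variable(2)
y = cp.Variable(2)
# Maximizing a convex function is not disciplined-convex, but it does have
# difference-of-convex structure, so the convex-concave procedure applies.
prob = cp.Problem(cp.Maximize(cp.norm(x - y, 2)),
                  [0 <= x, x <= 1, 0 <= y, y <= 1])
print("is DCP: ", prob.is_dcp())       # False
print("is DCCP:", dccp.is_dccp(prob))  # True
prob.solve(method="dccp")              # convex-concave procedure heuristic
print("objective:", prob.value)
```

Second, a sketch of sample-based estimates of the CVaR and EVaR risk measures used in the experiments. It assumes one common convention in which alpha denotes the tail probability (conventions and parameterizations vary across papers); the gamma-distributed costs and the search interval for the EVaR minimization are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
costs = rng.gamma(shape=2.0, scale=1.0, size=100_000)  # synthetic cost samples

def cvar(x, alpha):
    """Sample CVaR at tail level alpha: Rockafellar-Uryasev form,
    CVaR_alpha(X) = VaR + E[(X - VaR)_+] / alpha (mean of the worst
    alpha-fraction of costs)."""
    var = np.quantile(x, 1.0 - alpha)  # value-at-risk at confidence 1 - alpha
    return var + np.mean(np.maximum(x - var, 0.0)) / alpha

def evar(x, alpha):
    """Sample EVaR at tail level alpha:
    inf_{t>0} (1/t) * (log E[exp(t X)] - log alpha), via 1-D minimization."""
    def obj(t):
        # log-mean-exp of t*x, computed stably to avoid overflow
        lme = np.logaddexp.reduce(t * x) - np.log(len(x))
        return (lme - np.log(alpha)) / t
    res = minimize_scalar(obj, bounds=(1e-6, 5.0), method="bounded")
    return res.fun

for a in (0.2, 0.05):
    print(f"alpha={a}: CVaR={cvar(costs, a):.3f}  EVaR={evar(costs, a):.3f}")
```

As expected for coherent risk measures, the printed values satisfy EVaR >= CVaR >= expected cost, with both tending toward the worst-case cost as alpha shrinks.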

Published

2021-05-18

How to Cite

Ahmadi, M., Rosolia, U., Ingham, M. D., Murray, R. M., & Ames, A. D. (2021). Constrained Risk-Averse Markov Decision Processes. Proceedings of the AAAI Conference on Artificial Intelligence, 35(13), 11718-11725. https://doi.org/10.1609/aaai.v35i13.17393

Issue

Vol. 35 No. 13 (2021)

Section

AAAI Technical Track on Planning, Routing, and Scheduling