Approximate Bilevel Difference Convex Programming for Bayesian Risk Markov Decision Processes

Authors

  • Yifan Lin, Georgia Institute of Technology
  • Enlu Zhou, Georgia Institute of Technology

DOI

https://doi.org/10.1609/aaai.v39i25.34862

Abstract

We consider infinite-horizon Markov Decision Processes (MDPs) whose parameters, such as transition probabilities, are unknown and must be estimated from data. The popular distributionally robust approach to addressing this parameter uncertainty can sometimes be overly conservative. In this paper, we adopt the recently proposed Bayesian risk Markov Decision Process (BR-MDP) formulation to address parameter (or epistemic) uncertainty in MDPs. To solve the infinite-horizon BR-MDP with a class of convex risk measures, we propose a computationally efficient approach called approximate bilevel difference convex programming (ABDCP). The optimization is performed offline and produces an optimal policy, represented as a finite-state controller, with desirable performance guarantees. We also demonstrate the empirical performance of the BR-MDP formulation and the proposed algorithm.

Published

2025-04-11

How to Cite

Lin, Y., & Zhou, E. (2025). Approximate Bilevel Difference Convex Programming for Bayesian Risk Markov Decision Processes. Proceedings of the AAAI Conference on Artificial Intelligence, 39(25), 26605–26613. https://doi.org/10.1609/aaai.v39i25.34862

Section

AAAI Technical Track on Planning, Routing, and Scheduling