Addressing Myopic Constrained POMDP Planning with Recursive Dual Ascent

Authors

  • Paula Stocco, Stanford University
  • Suhas Chundi, Stanford University
  • Arec Jamgochian, Stanford University
  • Mykel J. Kochenderfer, Stanford University

DOI:

https://doi.org/10.1609/icaps.v34i1.31518

Abstract

Lagrangian-guided Monte Carlo tree search with global dual ascent has been applied to solve large constrained partially observable Markov decision processes (CPOMDPs) online. In this work, we demonstrate that these global dual parameters can lead to myopic action selection during exploration, ultimately leading to suboptimal decision making. To address this, we introduce history-dependent dual variables that guide local action selection and are optimized with recursive dual ascent. We empirically compare the performance of our approach on a motivating toy example and two large CPOMDPs, demonstrating improved exploration, and ultimately, safer outcomes.
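The global dual-ascent scheme the abstract builds on can be illustrated with a toy sketch: a single Lagrange multiplier scalarizes reward and cost, the planner acts greedily under the resulting Lagrangian, and the multiplier is updated by a subgradient step on the constraint violation. This is a minimal illustration under assumed toy rewards, costs, and a budget, not the paper's CPOMDP planner; the function and parameter names are hypothetical.

```python
def global_dual_ascent(actions, reward, cost, budget, lr=0.1, iters=200):
    """Toy global dual ascent (hypothetical example, not the paper's planner).

    A single multiplier `lam` penalizes cost in the Lagrangian
    R(a) - lam * C(a); `lam` rises while the greedy action exceeds the
    cost budget and falls (toward 0) once the constraint is satisfied.
    """
    lam = 0.0
    best = None
    for _ in range(iters):
        # Greedy action under the current Lagrangian.
        best = max(actions, key=lambda a: reward[a] - lam * cost[a])
        # Dual subgradient step: move lam in proportion to the
        # constraint violation C(a) - budget, projected to lam >= 0.
        lam = max(0.0, lam + lr * (cost[best] - budget))
    return lam, best

# Toy problem: a high-reward, high-cost action vs. a safe one,
# with a cost budget of 2 (all values assumed for illustration).
reward = {"risky": 10.0, "safe": 4.0}
cost = {"risky": 5.0, "safe": 1.0}
lam, action = global_dual_ascent(["risky", "safe"], reward, cost, budget=2.0)
```

Because a single global `lam` is shared everywhere, every belief state is penalized identically; the paper's contribution is to make such multipliers history-dependent and update them recursively within the search tree instead.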

Published

2024-05-30

How to Cite

Stocco, P., Chundi, S., Jamgochian, A., & Kochenderfer, M. J. (2024). Addressing Myopic Constrained POMDP Planning with Recursive Dual Ascent. Proceedings of the International Conference on Automated Planning and Scheduling, 34(1), 565-569. https://doi.org/10.1609/icaps.v34i1.31518