Improved POMDP Tree Search Planning with Prioritized Action Branching

Authors

  • John Mern, Stanford University
  • Anil Yildiz, Stanford University
  • Lawrence Bush, General Motors Research & Development
  • Tapan Mukerji, Stanford University
  • Mykel J. Kochenderfer, Stanford University

Keywords

Planning with Markov Models (MDPs, POMDPs), Online Learning & Bandits, Planning under Uncertainty, Sequential Decision Making

Abstract

Online solvers for partially observable Markov decision processes have difficulty scaling to problems with large action spaces. This paper proposes a method called PA-POMCPOW that samples a subset of the action space providing varying mixtures of exploitation and exploration for inclusion in a search tree. The proposed method first evaluates the action space according to a score function that is a linear combination of expected reward and expected information gain. The actions with the highest scores are then added to the search tree during tree expansion. Experiments show that PA-POMCPOW outperforms existing state-of-the-art solvers on problems with large discrete action spaces.
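The action-prioritization step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the score weight `lam`, the candidate count `k`, and the reward/information-gain estimators passed in are all hypothetical placeholders.

```python
# Sketch of prioritized action branching: score each candidate action
# by a linear combination of expected reward and expected information
# gain, then keep only the top-k actions for tree expansion.
# All names and numbers here are illustrative assumptions.

def score(action, expected_reward, expected_info_gain, lam=0.5):
    """Linear mix of an exploitation term and an exploration term."""
    return expected_reward(action) + lam * expected_info_gain(action)

def prioritized_actions(actions, expected_reward, expected_info_gain,
                        k=3, lam=0.5):
    """Return the k highest-scoring actions to add to the search tree."""
    ranked = sorted(
        actions,
        key=lambda a: score(a, expected_reward, expected_info_gain, lam),
        reverse=True,
    )
    return ranked[:k]

# Toy example with dummy estimates over four discrete actions.
reward = {0: 1.0, 1: 0.2, 2: 0.8, 3: 0.1}.get
info   = {0: 0.1, 1: 0.9, 2: 0.6, 3: 0.2}.get
print(prioritized_actions(range(4), reward, info, k=2))  # → [2, 0]
```

In a full solver, this selection would replace branching over the entire action space at each expanded belief node, keeping the tree's branching factor bounded by `k`.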

Published

2021-05-18

How to Cite

Mern, J., Yildiz, A., Bush, L., Mukerji, T., & Kochenderfer, M. J. (2021). Improved POMDP Tree Search Planning with Prioritized Action Branching. Proceedings of the AAAI Conference on Artificial Intelligence, 35(13), 11888-11894. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17412

Section

AAAI Technical Track on Planning, Routing, and Scheduling