Bayesian Optimized Monte Carlo Planning

Authors

  • John Mern, Stanford University
  • Anil Yildiz, Stanford University
  • Zachary Sunberg, University of Colorado Boulder
  • Tapan Mukerji, Stanford University
  • Mykel J. Kochenderfer, Stanford University

DOI:

https://doi.org/10.1609/aaai.v35i13.17411

Keywords:

Planning with Markov Models (MDPs, POMDPs), Planning under Uncertainty, Sequential Decision Making, Online Learning & Bandits

Abstract

Online solvers for partially observable Markov decision processes (POMDPs) have difficulty scaling to problems with large action spaces. Monte Carlo tree search with progressive widening attempts to improve scaling by sampling from the action space to construct the policy search tree. The performance of progressive widening search depends on the action sampling policy, often requiring problem-specific samplers. In this work, we present a general method for efficient action sampling based on Bayesian optimization. The proposed method uses a Gaussian process to model a belief over the action-value function and selects the action that maximizes the expected improvement in the optimal action value. We implement the proposed approach in a new online tree search algorithm called Bayesian Optimized Monte Carlo Planning (BOMCP). Several experiments show that BOMCP scales to large-action-space POMDPs better than existing state-of-the-art tree search solvers.
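
For intuition, the sampling step described above can be sketched in a few lines: fit a Gaussian process to the action-value estimates of actions already expanded at a node, then score new candidate actions by their expected improvement over the best observed value. The sketch below is an illustrative reconstruction in Python with scikit-learn, not the authors' implementation; the function names, the RBF kernel choice, and the exploration parameter xi are assumptions.

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def expected_improvement(mu, sigma, best_q, xi=0.01):
        # Expected improvement of each candidate's predicted value over the
        # best Q estimate observed so far (maximization form).
        sigma = np.maximum(sigma, 1e-9)  # guard against zero predictive std
        z = (mu - best_q - xi) / sigma
        return (mu - best_q - xi) * norm.cdf(z) + sigma * norm.pdf(z)

    def select_action(tried_actions, q_estimates, candidates):
        # Fit a GP belief over the action-value function at one tree node,
        # then pick the candidate action with the highest expected improvement.
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-3)
        gp.fit(tried_actions, q_estimates)
        mu, sigma = gp.predict(candidates, return_std=True)
        ei = expected_improvement(mu, sigma, np.max(q_estimates))
        return candidates[int(np.argmax(ei))]

    # Toy usage: a 1-D continuous action space on [0, 1].
    tried = np.array([[0.1], [0.5], [0.9]])  # actions already in the tree
    qs = np.array([1.0, 2.5, 1.8])           # Monte Carlo Q estimates
    cands = np.random.default_rng(0).uniform(0.0, 1.0, size=(32, 1))
    print(select_action(tried, qs, cands))

Under these assumptions, this selection would replace the uniform action sampling that plain progressive widening uses, so the tree grows toward actions the Gaussian process posterior considers promising.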

Published

2021-05-18

How to Cite

Mern, J., Yildiz, A., Sunberg, Z., Mukerji, T., & Kochenderfer, M. J. (2021). Bayesian Optimized Monte Carlo Planning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(13), 11880-11887. https://doi.org/10.1609/aaai.v35i13.17411

Section

AAAI Technical Track on Planning, Routing, and Scheduling