An On-Line Planner for POMDPs with Large Discrete Action Space: A Quantile-Based Approach

Authors

  • Erli Wang The University of Queensland, Australia
  • Hanna Kurniawati The University of Queensland, Australia
  • Dirk Kroese The University of Queensland, Australia

DOI:

https://doi.org/10.1609/icaps.v28i1.13906

Keywords:

Partially Observable Markov Decision Processes (POMDP), Planning under uncertainty

Abstract

Making principled decisions in the presence of uncertainty is often facilitated by Partially Observable Markov Decision Processes (POMDPs). Despite tremendous advances in POMDP solvers, finding good policies for problems with large action spaces remains difficult. To alleviate this difficulty, this paper presents an on-line approximate solver called Quantile-Based Action Selector (QBASE). It uses quantile statistics to adaptively evaluate only a small subset of the action space, with little loss in the quality of the generated decision strategies. Experiments on four different robotics tasks with up to 10,000 actions indicate that QBASE can generate substantially better strategies than a state-of-the-art method.
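The abstract does not detail the algorithm, but the general quantile-based idea can be illustrated with a minimal sketch: given (possibly noisy) value estimates for a large discrete action set, keep only the actions whose estimates reach a chosen empirical quantile, so that further search effort concentrates on a small promising subset. The function name `quantile_filter_actions`, the parameters `q` and `min_keep`, and the toy value estimates below are illustrative assumptions, not QBASE's actual procedure.

```python
import random

def quantile_filter_actions(actions, value_estimates, q=0.9, min_keep=5):
    """Keep only actions whose estimated value reaches the empirical q-quantile.

    `actions` is a list of action identifiers; `value_estimates` maps each
    action to a value estimate (e.g. from Monte Carlo rollouts). Actions below
    the quantile threshold are pruned so only a small subset is evaluated further.
    """
    values = sorted(value_estimates[a] for a in actions)
    # Empirical q-quantile of the current value estimates.
    idx = min(len(values) - 1, int(q * len(values)))
    threshold = values[idx]
    kept = [a for a in actions if value_estimates[a] >= threshold]
    # Guard against pruning too aggressively when estimates are tied or noisy.
    if len(kept) < min_keep:
        kept = sorted(actions, key=lambda a: value_estimates[a], reverse=True)[:min_keep]
    return kept


if __name__ == "__main__":
    # Toy usage: 10,000 candidate actions with random value estimates.
    acts = list(range(10_000))
    est = {a: random.gauss(0.0, 1.0) for a in acts}
    subset = quantile_filter_actions(acts, est, q=0.99)
    print(f"Evaluating {len(subset)} of {len(acts)} actions further")
```

In an on-line planner, such a filter would be re-applied as value estimates are refined, so the retained subset adapts over the course of the search; the details of how QBASE does this are given in the paper itself.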

Published

2018-06-15

How to Cite

Wang, E., Kurniawati, H., & Kroese, D. (2018). An On-Line Planner for POMDPs with Large Discrete Action Space: A Quantile-Based Approach. Proceedings of the International Conference on Automated Planning and Scheduling, 28(1), 273-277. https://doi.org/10.1609/icaps.v28i1.13906