Utilising Uncertainty for Efficient Learning of Likely-Admissible Heuristics

Authors

  • Ofir Marom, University of the Witwatersrand
  • Benjamin Rosman, University of the Witwatersrand

DOI:

https://doi.org/10.1609/icaps.v30i1.6752

Abstract

Likely-admissible heuristics have previously been introduced as heuristics that are admissible with some probability. While such heuristics only produce likely-optimal plans, they have the advantage that it is more feasible to learn them from training data using machine learning algorithms. Ideally, this training data consists of optimal plans, but such data is often prohibitively expensive to produce. To overcome this, previous work introduced a bootstrap procedure that generates training tasks at random and incrementally learns on progressively more complex tasks. However, 1) random task generation is inefficient; and 2) the procedure generates non-optimal plans for training, which causes errors to compound as learning progresses and results in high suboptimality. In this paper we introduce a framework that utilises uncertainty to overcome the shortcomings of previous approaches. In particular, we show that we can use uncertainty to efficiently explore task-space when generating training tasks, and then learn likely-admissible heuristics that produce low suboptimality. We illustrate the advantages of our approach on the 15-puzzle, 24-puzzle, 24-pancake and 15-blocksworld domains, using Bayesian neural networks to model uncertainty.
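To make the core idea concrete, the sketch below shows one common way a likely-admissible estimate can be extracted from an uncertainty-aware model: draw samples from the model's predictive distribution of the true cost-to-go and return a lower quantile, so the heuristic underestimates the true cost with the desired probability. This is a minimal illustration, not the authors' implementation; the function name, the choice of alpha, and the use of synthetic samples standing in for draws from a Bayesian neural network are all assumptions made here for demonstration.

```python
# Illustrative sketch (not the paper's code): given posterior samples of the
# true cost-to-go h*(s) for a state -- e.g. drawn from a Bayesian neural
# network's predictive distribution -- return a heuristic value that
# underestimates h*(s) with probability roughly `alpha`.
import numpy as np

def likely_admissible_estimate(posterior_samples, alpha=0.95):
    """Return the (1 - alpha)-quantile of the predictive samples, so the
    estimate is below the true cost with probability about alpha."""
    return float(np.quantile(posterior_samples, 1.0 - alpha))

# Hypothetical usage: 100 synthetic draws standing in for the network's
# predictive distribution over h*(s) for a single puzzle state.
samples = np.random.normal(loc=42.0, scale=3.0, size=100)
h = likely_admissible_estimate(samples, alpha=0.95)
print(h)  # a conservative value, typically a few units below the mean of 42
```

Raising alpha makes the heuristic more likely to be admissible (and the resulting plan more likely to be optimal) at the cost of a weaker, less informed estimate.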

Published

2020-06-01

How to Cite

Marom, O., & Rosman, B. (2020). Utilising Uncertainty for Efficient Learning of Likely-Admissible Heuristics. Proceedings of the International Conference on Automated Planning and Scheduling, 30(1), 560-568. https://doi.org/10.1609/icaps.v30i1.6752