Learning Generalised Policies for Numeric Planning

Authors

  • Ryan Xiao Wang, School of Computing, Australian National University
  • Sylvie Thiébaux, School of Computing, Australian National University; LAAS-CNRS, Université de Toulouse

DOI:

https://doi.org/10.1609/icaps.v34i1.31526

Abstract

We extend Action Schema Networks (ASNets) to learn generalised policies for numeric planning, which features quantitative numeric state variables, preconditions and effects. We propose a neural network architecture that can reason about numeric variables both directly and in the context of other variables. We also develop a dynamic exploration algorithm for more efficient training, which better balances the exploration versus learning tradeoff to account for the greater computational demands of numeric teacher planners. Experimentally, we find that the learned generalised policies can outperform traditional numeric planners on some domains, and that the dynamic exploration algorithm is on average much faster at learning effective generalised policies than the original ASNets training algorithm.
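As an illustration only, and not the authors' architecture, the sketch below shows one hypothetical way a network module might consume a numeric state variable both directly and in the context of related variables (e.g. variables appearing in the same preconditions). The module name, the use of pairwise differences as context features, and the choice of PyTorch are all assumptions for the sake of the example.

```python
# Hypothetical sketch: a module that encodes a numeric variable's raw value
# together with its relation to co-occurring numeric variables.
import torch
import torch.nn as nn

class NumericContextModule(nn.Module):
    def __init__(self, hidden_dim: int = 16):
        super().__init__()
        # Encodes [related value, difference to related value] pairs.
        self.pair_encoder = nn.Sequential(
            nn.Linear(2, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, hidden_dim)
        )
        # Combines the raw value with the pooled context feature.
        self.out = nn.Linear(hidden_dim + 1, hidden_dim)

    def forward(self, value: torch.Tensor, related: torch.Tensor) -> torch.Tensor:
        # value: (batch,) numeric variable; related: (batch, k) related variables.
        diffs = value.unsqueeze(-1) - related              # pairwise differences
        pairs = torch.stack([related, diffs], dim=-1)      # (batch, k, 2)
        context = self.pair_encoder(pairs).mean(dim=1)     # pool over related vars
        features = torch.cat([value.unsqueeze(-1), context], dim=-1)
        return torch.relu(self.out(features))

# Example: one numeric variable with three related variables.
module = NumericContextModule()
v = torch.tensor([4.0])
rel = torch.tensor([[1.0, 2.5, 6.0]])
print(module(v, rel).shape)  # torch.Size([1, 16])
```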

Published

2024-05-30

How to Cite

Wang, R. X., & Thiébaux, S. (2024). Learning Generalised Policies for Numeric Planning. Proceedings of the International Conference on Automated Planning and Scheduling, 34(1), 633-642. https://doi.org/10.1609/icaps.v34i1.31526