Action Schema Networks: Generalised Policies With Deep Learning

Authors

  • Sam Toyer, Australian National University
  • Felipe Trevizan, Australian National University; Data61, CSIRO
  • Sylvie Thiébaux, Australian National University
  • Lexing Xie, Australian National University; Data to Decisions CRC

DOI:

https://doi.org/10.1609/aaai.v32i1.12089

Abstract

In this paper, we introduce the Action Schema Network (ASNet): a neural network architecture for learning generalised policies for probabilistic planning problems. By mimicking the relational structure of planning problems, ASNets are able to adopt a weight sharing scheme which allows the network to be applied to any problem from a given planning domain. This allows the cost of training the network to be amortised over all problems in that domain. Further, we propose a training method which balances exploration and supervised training on small problems to produce a policy which remains robust when evaluated on larger problems. In experiments, we show that ASNet's learning capability allows it to significantly outperform traditional non-learning planners in several challenging domains.
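The weight-sharing scheme described in the abstract (one parameter set per action schema, reused by every ground action instantiated from that schema) can be pictured with a small sketch. The NumPy toy below is only an illustration under assumptions, not the paper's architecture or training code; the toy domain and the names `schema_weights` and `action_module` are hypothetical.

```python
# Illustrative sketch of per-schema weight sharing (assumed details, not the
# authors' implementation).
import numpy as np

rng = np.random.default_rng(0)

HIDDEN = 4      # hidden units per action module
PROP_FEATS = 3  # feature size of each related proposition

# One weight matrix and bias per action *schema*, shared by every ground
# action instantiated from that schema. Here a "move" module reads two
# related propositions (e.g. at(?from) and at(?to)).
schema_weights = {
    "move": (rng.standard_normal((HIDDEN, 2 * PROP_FEATS)), np.zeros(HIDDEN)),
}

def action_module(schema, related_prop_feats):
    """Apply the schema's shared weights to the concatenated features of the
    propositions mentioned by one ground action."""
    W, b = schema_weights[schema]
    x = np.concatenate(related_prop_feats)
    return np.maximum(W @ x + b, 0.0)  # ReLU as a stand-in nonlinearity

# Two different ground actions from the same schema reuse the same weights,
# so the parameter count does not grow with the number of objects.
at_a, at_b, at_c = (rng.standard_normal(PROP_FEATS) for _ in range(3))
h_move_ab = action_module("move", [at_a, at_b])
h_move_bc = action_module("move", [at_b, at_c])
print(h_move_ab.shape, h_move_bc.shape)  # (4,) (4,)
```

Because the parameters belong to schemas rather than ground actions, the same trained weights can be applied to any problem from the domain, which is what lets the training cost be amortised as the abstract describes.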

Published

2018-04-26

How to Cite

Toyer, S., Trevizan, F., Thiébaux, S., & Xie, L. (2018). Action Schema Networks: Generalised Policies With Deep Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.12089