Training Deep Reactive Policies for Probabilistic Planning Problems

Authors

  • Murugeswari Issakkimuthu, Oregon State University
  • Alan Fern, Oregon State University
  • Prasad Tadepalli, Oregon State University

DOI:

https://doi.org/10.1609/icaps.v28i1.13873

Keywords:

Probabilistic Planning, Deep Learning

Abstract

State-of-the-art probabilistic planners typically apply look-ahead search and reasoning at each step to make a decision. While this approach can enable high-quality decisions, it can be computationally expensive for problems that require fast decision making. In this paper, we investigate the potential for deep learning to replace search by fast reactive policies. We focus on supervised learning of deep reactive policies for probabilistic planning problems described in RDDL. A key challenge is to explore the large design space of network architectures and training methods, which was critical to prior deep learning successes. We investigate a number of choices in this space and conduct experiments across a set of benchmark problems. Our results show that effective deep reactive policies can be learned for many benchmark problems and that leveraging the planning problem description to define the network structure can be beneficial.
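To make the setting concrete, the following is a minimal sketch (not the paper's architecture) of the supervised learning setup the abstract describes: a feed-forward reactive policy that maps a vector of state fluents to a distribution over actions and is trained by cross-entropy to imitate action choices from a planner. All names, sizes, and the stand-in "expert" labels are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of a deep reactive policy trained by imitation:
# state fluents in -> softmax over actions out. Sizes are arbitrary.
rng = np.random.default_rng(0)
n_fluents, n_actions, n_hidden = 8, 4, 16

# Synthetic "planner demonstrations": in the paper these would come from
# running a search-based planner on RDDL problem instances; here a simple
# deterministic rule stands in for the expert labels.
states = rng.integers(0, 2, size=(200, n_fluents)).astype(float)
actions = states[:, :n_actions].argmax(axis=1)

# One-hidden-layer network parameters.
W1 = rng.normal(0, 0.1, (n_fluents, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, n_actions))
b2 = np.zeros(n_actions)

def forward(x):
    """Return hidden activations and softmax action probabilities."""
    h = np.maximum(0.0, x @ W1 + b1)                  # ReLU hidden layer
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, e / e.sum(axis=1, keepdims=True)        # softmax policy

# Full-batch gradient descent on the cross-entropy imitation loss.
lr = 0.5
for _ in range(300):
    h, p = forward(states)
    grad = p.copy()
    grad[np.arange(len(actions)), actions] -= 1.0     # d(loss)/d(logits)
    grad /= len(actions)
    dh = (grad @ W2.T) * (h > 0)                      # backprop before update
    W2 -= lr * h.T @ grad;      b2 -= lr * grad.sum(0)
    W1 -= lr * states.T @ dh;   b1 -= lr * dh.sum(0)

_, p = forward(states)
accuracy = (p.argmax(axis=1) == actions).mean()
```

At decision time the learned policy acts reactively: a single forward pass picks `p.argmax()` for the current state, with no look-ahead search. The paper's contribution lies in exploring architectures for this mapping, including structuring the network using the RDDL problem description.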

Published

2018-06-15

How to Cite

Issakkimuthu, M., Fern, A., & Tadepalli, P. (2018). Training Deep Reactive Policies for Probabilistic Planning Problems. Proceedings of the International Conference on Automated Planning and Scheduling, 28(1), 422-430. https://doi.org/10.1609/icaps.v28i1.13873