Learning Classical Planning Strategies with Policy Gradient


  • Paweł Gomoluch Imperial College London
  • Dalal Alrajeh Imperial College London
  • Alessandra Russo Imperial College London


A common paradigm in classical planning is heuristic forward search. Forward search planners often rely on a simple best-first search strategy which remains fixed throughout the search process. In this paper, we introduce a novel search framework capable of alternating between several forward search approaches while solving a particular planning problem. Selection of the approach is performed using a trainable stochastic policy, which maps the state of the search to a probability distribution over the approaches. This enables using policy gradient to learn search strategies tailored to a specific distribution of planning problems and a selected performance metric, e.g. the IPC score. We instantiate the framework by constructing a policy space consisting of five search approaches and a two-dimensional representation of the planner's state. We then train the system on randomly generated problems from five IPC domains using three different performance metrics. Our experimental results show that the learner is able to discover domain-specific search strategies, improving the planner's performance relative to the baselines of plain best-first search and a uniform policy.
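To illustrate the core idea, the following is a minimal sketch (not the authors' implementation) of a softmax policy over a discrete set of search approaches, conditioned on a low-dimensional search-state representation and trained with the REINFORCE policy-gradient update. The approach names and state features are hypothetical stand-ins for the five approaches and two-dimensional state used in the paper.

```python
import numpy as np

# Hypothetical labels for the discrete search approaches the policy chooses among.
APPROACHES = ["greedy_bfs", "eps_greedy_bfs", "local_search", "random_walk", "lookahead"]

class SoftmaxPolicy:
    """Linear-softmax policy pi(a | s) with a REINFORCE update."""

    def __init__(self, n_features, n_actions, seed=0):
        rng = np.random.default_rng(seed)
        # Small random init of the weight matrix mapping state features to logits.
        self.W = 0.01 * rng.standard_normal((n_actions, n_features))

    def probs(self, state):
        logits = self.W @ state
        logits -= logits.max()          # subtract max for numerical stability
        e = np.exp(logits)
        return e / e.sum()

    def sample(self, state, rng):
        p = self.probs(state)
        action = rng.choice(len(p), p=p)
        return action, p

    def reinforce_update(self, trajectory, reward, lr=0.1):
        # For a linear-softmax policy, grad log pi(a|s) w.r.t. W is
        # (one_hot(a) - pi) outer s; scale by the episode reward.
        for state, action, p in trajectory:
            grad = -np.outer(p, state)
            grad[action] += state
            self.W += lr * reward * grad

# Toy usage: the state could be e.g. (normalized node-expansion count,
# heuristic progress) -- a stand-in for the paper's 2-D representation.
rng = np.random.default_rng(1)
policy = SoftmaxPolicy(n_features=2, n_actions=len(APPROACHES))
state = np.array([0.3, 0.7])
action, p = policy.sample(state, rng)
# After an episode, reinforce with the obtained performance metric as reward.
policy.reinforce_update([(state, action, p)], reward=1.0)
```

With a positive reward, the update increases the probability of the sampled approach in that state; repeated over many generated problems, this is how a domain-specific strategy could be learned.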




How to Cite

Gomoluch, P., Alrajeh, D., & Russo, A. (2019). Learning Classical Planning Strategies with Policy Gradient. Proceedings of the International Conference on Automated Planning and Scheduling, 29(1), 637-645. Retrieved from https://ojs.aaai.org/index.php/ICAPS/article/view/3531