Learning a Priority Ordering for Prioritized Planning in Multi-Agent Path Finding

Authors

  • Shuyang Zhang, University of Southern California
  • Jiaoyang Li, University of Southern California
  • Taoan Huang, University of Southern California
  • Sven Koenig, University of Southern California
  • Bistra Dilkina, University of Southern California

DOI:

https://doi.org/10.1609/socs.v15i1.21769

Keywords:

Machine And Deep Learning In Search, Search In Robotics

Abstract

Prioritized Planning (PP) is a fast and popular framework for solving Multi-Agent Path Finding, but its solution quality depends heavily on the predetermined priority ordering of the agents. Existing PP algorithms use either greedy policies or random assignments to determine a total priority ordering, but none of them dominates the others in terms of success rate and solution quality (measured by the sum-of-costs). We propose a machine-learning (ML) framework to learn a good priority ordering for PP. We develop two models: ML-T, which is trained on a total priority ordering, and ML-P, which is trained on a partial priority ordering. We further boost the effectiveness of PP by applying stochastic ranking and random restarts. The results show that our ML-guided PP algorithms outperform the existing PP algorithms in success rate, runtime, and solution quality on small maps in most cases, and remain competitive on large maps despite the difficulty of collecting training data on these maps.
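To make the PP loop concrete, below is a minimal sketch of prioritized planning with random restarts on a toy grid. It is illustrative only and not the paper's implementation: the single-agent planner here is a plain space-time BFS, the grid instance and all function names are made-up assumptions, and the ordering is drawn uniformly at random rather than produced by the learned ML-T/ML-P models with stochastic ranking.

```python
# Illustrative sketch of Prioritized Planning (PP) with random restarts on a
# toy 4-connected grid. Assumptions not taken from the paper: the single-agent
# planner is a plain space-time BFS, the grid instance is made up, and the
# priority ordering is a uniform random shuffle (the paper instead learns the
# ordering with ML-T / ML-P and applies stochastic ranking).
import random
from collections import deque


def plan_single_agent(grid, start, goal, reserved, horizon=64):
    """Shortest space-time path to `goal` that avoids vertex and edge
    conflicts with the already-planned (higher-priority) agents."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([(start, 0)])
    parent = {(start, 0): None}
    while frontier:
        cell, t = frontier.popleft()
        # Accept the goal only if the agent can then rest there forever.
        if cell == goal and all((goal, k) not in reserved["vertex"]
                                for k in range(t, horizon + 1)):
            path, node = [], (cell, t)
            while node is not None:
                path.append(node[0])
                node = parent[node]
            return path[::-1]
        if t >= horizon:
            continue
        r, c = cell
        for nxt in [(r, c), (r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]:
            nr, nc = nxt
            if not (0 <= nr < rows and 0 <= nc < cols) or grid[nr][nc] == 1:
                continue
            if (nxt, t + 1) in reserved["vertex"]:
                continue                       # vertex conflict
            if (cell, nxt, t + 1) in reserved["edge"]:
                continue                       # swapping (edge) conflict
            if (nxt, t + 1) not in parent:
                parent[(nxt, t + 1)] = (cell, t)
                frontier.append((nxt, t + 1))
    return None                                # no conflict-free path found


def prioritized_planning(grid, starts, goals, order, horizon=64):
    """Plan agents one at a time in the given priority order; earlier
    agents' paths are treated as moving obstacles by later agents."""
    reserved = {"vertex": set(), "edge": set()}
    paths = [None] * len(starts)
    for i in order:
        path = plan_single_agent(grid, starts[i], goals[i], reserved, horizon)
        if path is None:
            return None                        # PP fails under this ordering
        paths[i] = path
        for t, cell in enumerate(path):
            reserved["vertex"].add((cell, t))
            if t > 0:
                reserved["edge"].add((cell, path[t - 1], t))
        for t in range(len(path), horizon + 1):
            reserved["vertex"].add((path[-1], t))   # agent waits at its goal
    return paths


def pp_with_random_restarts(grid, starts, goals, restarts=20):
    """Retry PP with fresh random total orderings and keep the best
    solution found, measured by the sum-of-costs."""
    best, best_cost = None, float("inf")
    for _ in range(restarts):
        order = list(range(len(starts)))
        random.shuffle(order)
        paths = prioritized_planning(grid, starts, goals, order)
        if paths is not None:
            cost = sum(len(p) - 1 for p in paths)
            if cost < best_cost:
                best, best_cost = paths, cost
    return best, best_cost


if __name__ == "__main__":
    grid = [[0, 0, 0, 0],
            [0, 1, 1, 0],
            [0, 0, 0, 0]]                      # toy 3x4 map; 1 = obstacle
    starts, goals = [(0, 0), (2, 0)], [(2, 3), (0, 3)]
    paths, cost = pp_with_random_restarts(grid, starts, goals)
    if paths is not None:
        print("sum-of-costs:", cost)
        for i, p in enumerate(paths):
            print("agent", i, "->", p)
```

In the paper's framework, the random shuffle above would be replaced by an ordering proposed by the learned models (ML-T or ML-P), with stochastic ranking and random restarts layered on top to diversify the orderings that PP attempts.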

Published

2022-07-17