Reinforcement Learning for Zone Based Multiagent Pathfinding under Uncertainty

Authors

  • Jiajing Ling Singapore Management University
  • Tarun Gupta University of Oxford
  • Akshat Kumar Singapore Management University

DOI:

https://doi.org/10.1609/icaps.v30i1.6751

Abstract

We address the problem of multiple agents finding paths from their respective source nodes to destination nodes in a graph (multiagent pathfinding, or MAPF). Most existing approaches assume that all agents move at a fixed speed and that a single node can accommodate only a single agent. Motivated by emerging applications of autonomous vehicles, such as drone traffic management, we present zone-based pathfinding (ZBPF), in which agents move among zones and movements incur uncertain travel times. Furthermore, each zone can accommodate multiple agents, up to its capacity. We also develop a simulator for ZBPF that provides a clean interface between the simulation environment and learning algorithms. We develop a novel formulation of the ZBPF problem using difference-of-convex (DC) programming; the resulting approach can be used for policy learning from simulator samples. We also present a multiagent credit assignment scheme that helps our learning approach converge faster. Empirical results on a number of 2D and 3D instances show that our approach effectively minimizes congestion in zones while ensuring agents reach their final destinations.
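To make the ZBPF setting concrete, the following is a minimal illustrative sketch, not the paper's simulator or method: agents move between zones of limited capacity, a move samples a stochastic travel time, and an agent waits when the next zone is full. All class and function names here are hypothetical, and the greedy one-step rule stands in for the learned policy.

```python
import random

class ZoneGraph:
    """Toy zone graph: each zone has a capacity and a list of neighbors.
    This is a hypothetical sketch, not the ZBPF simulator from the paper."""
    def __init__(self, capacity, edges):
        self.capacity = capacity          # zone -> max agents allowed
        self.edges = edges                # zone -> neighboring zones
        self.occupancy = {z: 0 for z in capacity}

    def can_enter(self, zone):
        return self.occupancy[zone] < self.capacity[zone]

def sample_travel_time(rng, mean=2.0):
    # Stochastic travel time (a geometric draw, purely for illustration).
    t = 1
    while rng.random() > 1.0 / mean:
        t += 1
    return t

def step(graph, agent_zone, target, rng):
    """Move one zone toward the target if a neighbor has free capacity;
    otherwise wait (congestion). A learned policy would replace the
    greedy neighbor choice below."""
    if agent_zone == target:
        return agent_zone, 0
    for nxt in graph.edges[agent_zone]:
        if graph.can_enter(nxt):
            graph.occupancy[agent_zone] -= 1
            graph.occupancy[nxt] += 1
            return nxt, sample_travel_time(rng)
    return agent_zone, 1  # all neighbors full: wait one tick

rng = random.Random(0)
g = ZoneGraph(capacity={"A": 2, "B": 1, "C": 2},
              edges={"A": ["B"], "B": ["C"], "C": []})
g.occupancy["A"] = 1
zone, t = step(g, "A", "C", rng)
print(zone)  # the agent moves A -> B, since B has spare capacity
```

The key departure from classical MAPF is visible even in this toy: occupancy is per-zone rather than per-node, and the travel time returned by `step` is a random variable rather than a fixed unit cost.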

Published

2020-06-01

How to Cite

Ling, J., Gupta, T., & Kumar, A. (2020). Reinforcement Learning for Zone Based Multiagent Pathfinding under Uncertainty. Proceedings of the International Conference on Automated Planning and Scheduling, 30(1), 551-559. https://doi.org/10.1609/icaps.v30i1.6751