Learn to Follow: Decentralized Lifelong Multi-Agent Pathfinding via Planning and Learning

Authors

  • Alexey Skrynnik: AIRI, Moscow, Russia; Federal Research Center for Computer Science and Control of Russian Academy of Sciences, Moscow, Russia
  • Anton Andreychuk: AIRI, Moscow, Russia
  • Maria Nesterova: Federal Research Center for Computer Science and Control of Russian Academy of Sciences, Moscow, Russia; MIPT, Dolgoprudny, Russia
  • Konstantin Yakovlev: Federal Research Center for Computer Science and Control of Russian Academy of Sciences, Moscow, Russia; AIRI, Moscow, Russia
  • Aleksandr Panov: AIRI, Moscow, Russia; MIPT, Dolgoprudny, Russia

DOI:

https://doi.org/10.1609/aaai.v38i16.29704

Keywords:

MAS: Multiagent Planning, ROB: Multi-Robot Systems, ML: Reinforcement Learning, PRS: Planning/Scheduling and Learning, SO: Heuristic Search

Abstract

The Multi-agent Pathfinding (MAPF) problem asks for a set of conflict-free paths for a set of agents confined to a graph and is typically solved in a centralized fashion. In this work, we instead investigate the decentralized MAPF setting, in which the central controller that possesses all the information on the agents' locations and goals is absent, and the agents have to sequentially decide on their actions on their own without access to the full state of the environment. We focus on the practically important lifelong variant of MAPF, in which agents are continuously assigned new goals upon reaching their previous ones. To address this complex problem, we propose a method that integrates two complementary approaches: planning with heuristic search and reinforcement learning through policy optimization. Planning is utilized to construct and re-plan individual paths; we enhance our planning algorithm with a dedicated technique tailored to avoid congestion and increase the throughput of the system. We employ reinforcement learning to discover collision-avoidance policies that effectively guide the agents along the planned paths. The policy is implemented as a neural network and is effectively trained without any reward shaping or external guidance. We evaluate our method on a wide range of setups, comparing it to state-of-the-art solvers. The results show that our method consistently outperforms the learnable competitors, achieving higher throughput and generalizing better to maps unseen at the training stage. Moreover, our solver outperforms a rule-based one in terms of throughput and is an order of magnitude faster than a state-of-the-art search-based solver. The code is available at https://github.com/AIRI-Institute/learn-to-follow.
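To make the division of labor in the abstract concrete, the following is a minimal Python sketch of the plan-then-follow loop under stated assumptions: each agent independently re-plans its own path with A* on a grid, and a follower policy resolves local conflicts. In the paper the follower is a learned neural policy trained by policy optimization; here it is stubbed with a wait-if-blocked rule purely to show the control flow. All names (astar, follower_policy, lifelong_step, new_goal) are hypothetical illustrations, not the actual learn-to-follow API.

```python
# Hypothetical sketch of decentralized lifelong MAPF via plan-then-follow.
# Not the authors' implementation: the learned policy is stubbed out.
import heapq

MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

def astar(grid, start, goal):
    """Plan an individual shortest path on a 4-connected grid,
    ignoring the other agents (they are handled by the follower)."""
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    open_set = [(h(start), 0, start)]
    came_from, g = {}, {start: 0}
    while open_set:
        _, cost, cur = heapq.heappop(open_set)
        if cost > g.get(cur, float("inf")):
            continue  # stale queue entry
        if cur == goal:
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in MOVES:
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and cost + 1 < g.get(nxt, float("inf"))):
                g[nxt] = cost + 1
                came_from[nxt] = cur
                heapq.heappush(open_set, (cost + 1 + h(nxt), cost + 1, nxt))
    return [start]  # no path found: stay in place

def follower_policy(agent_pos, next_waypoint, occupied):
    """Stand-in for the learned collision-avoidance policy: follow the
    planned path if the next cell is free, otherwise wait. The trained
    neural network instead decides from the agent's local observation."""
    return next_waypoint if next_waypoint not in occupied else agent_pos

def lifelong_step(grid, agents, goals, new_goal):
    """One decentralized timestep: every agent re-plans its own path,
    the follower picks the actual move, and agents that reach a goal
    immediately receive a new one (the lifelong setting)."""
    occupied = set(agents)
    for i, pos in enumerate(agents):
        path = astar(grid, pos, goals[i])       # individual plan, no coordination
        target = path[1] if len(path) > 1 else pos
        occupied.discard(pos)                   # the agent may stay in place
        move = follower_policy(pos, target, occupied)
        occupied.add(move)
        agents[i] = move
        if move == goals[i]:
            goals[i] = new_goal(i)              # assign the next goal
    return agents, goals
```

The sequential conflict check inside lifelong_step is only a readability device; the method described in the abstract keeps agents fully decentralized, with each one acting on its own local observation rather than on a shared occupancy set.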

Published

2024-03-24

How to Cite

Skrynnik, A., Andreychuk, A., Nesterova, M., Yakovlev, K., & Panov, A. (2024). Learn to Follow: Decentralized Lifelong Multi-Agent Pathfinding via Planning and Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(16), 17541-17549. https://doi.org/10.1609/aaai.v38i16.29704

Issue

Vol. 38 No. 16 (2024)

Section

AAAI Technical Track on Multiagent Systems