MAPDP: Cooperative Multi-Agent Reinforcement Learning to Solve Pickup and Delivery Problems
Keywords: Planning, Routing, and Scheduling (PRS)
Abstract
The cooperative Pickup and Delivery Problem (PDP), a variant of the classic Vehicle Routing Problem (VRP), is an important formulation in many real-world applications, such as on-demand delivery and industrial warehousing. Efficiently producing high-quality solutions to the cooperative PDP is therefore of great importance. However, doing so is non-trivial due to two major challenges: 1) the structural dependency between pickup and delivery pairs requires explicit modeling and representation; 2) the cooperation between different vehicles is tightly coupled with solution exploration and difficult to model. In this paper, we propose MAPDP, a novel multi-agent reinforcement learning framework for solving the cooperative PDP. First, we design a paired context embedding that captures the dependency between nodes under their structural constraints. Second, we employ cooperative multi-agent decoders that leverage the decision dependence among vehicle agents through a dedicated communication embedding. Third, we design a novel cooperative A2C algorithm to train the integrated model. We conduct extensive experiments on a randomly generated dataset and a real-world dataset. Experimental results show that MAPDP outperforms all baselines by at least 1.64% in all settings and achieves significantly faster computation during solution inference.
How to Cite
Zong, Z., Zheng, M., Li, Y., & Jin, D. (2022). MAPDP: Cooperative Multi-Agent Reinforcement Learning to Solve Pickup and Delivery Problems. Proceedings of the AAAI Conference on Artificial Intelligence, 36(9), 9980-9988. https://doi.org/10.1609/aaai.v36i9.21236
AAAI Technical Track on Planning, Routing, and Scheduling