Learning Multi-Agent Action Coordination via Electing First-Move Agent
Keywords:
Action Coordination, Multi-agent Reinforcement Learning, Election Mechanism
Abstract
Learning to coordinate actions among agents is essential in complex multi-agent systems. Prior work is constrained mainly by the assumption that all agents act simultaneously, and asynchronous action coordination between agents is rarely considered. This paper introduces a bi-level multi-agent decision hierarchy for coordinated behavior planning. We propose a novel election mechanism that adopts a graph convolutional network to model the interaction among agents and elect a first-move agent for asynchronous guidance. We also propose a dynamically weighted mixing network to effectively reduce the misestimation of the value function during training. This work is the first to explicitly model asynchronous multi-agent action coordination, and this explicit modeling enables choosing the optimal first-move agent. Results on Cooperative Navigation and Google Football demonstrate that the proposed algorithm achieves superior performance in cooperative environments. Our code is available at https://github.com/Amanda-1997/EFA-DWM.
Published
2022-06-13
How to Cite
Ruan, J., Meng, L., Xiong, X., Xing, D., & Xu, B. (2022). Learning Multi-Agent Action Coordination via Electing First-Move Agent. Proceedings of the International Conference on Automated Planning and Scheduling, 32(1), 624-628. Retrieved from https://ojs.aaai.org/index.php/ICAPS/article/view/19850
Issue
Vol. 32 No. 1 (2022)
Section
Planning and Learning Track