Hierarchical Mean-Field Deep Reinforcement Learning for Large-Scale Multiagent Systems

Authors

  • Chao Yu, Sun Yat-sen University, Guangzhou, China; Pengcheng Laboratory, Shenzhen, China

DOI:

https://doi.org/10.1609/aaai.v37i10.26387

Keywords:

MAS: Multiagent Learning, ML: Reinforcement Learning Algorithms

Abstract

Learning efficient coordination in large-scale multiagent systems suffers from the curse of dimensionality due to the exponential growth of agent interactions. Mean-Field (MF)-based methods address this issue by reducing the interactions within the whole system to those between a single agent and the average effect of its neighbors. However, representing the neighbors merely by their average ignores the varying influence of each individual neighbor, and learning from this purely local average effect is likely to yield inferior system performance due to the lack of an efficient coordination mechanism at the population level. In this work, we propose a Hierarchical Mean-Field (HMF) learning framework to further improve the performance of existing MF methods. The basic idea is to approximate the average effect for a sub-group of agents while accounting for their different influences within the sub-group, and to realize population-level coordination through the interactions among different sub-groups. Empirical studies show that HMF significantly outperforms existing baselines on both challenging cooperative and mixed cooperative-competitive tasks across different scales of agent populations.
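The two-level idea described in the abstract can be illustrated with a minimal sketch: within each sub-group, neighbors' (one-hot) actions are averaged with non-uniform weights reflecting their individual influence, and the sub-group mean fields are then aggregated at the population level. This is an assumption-laden illustration, not the paper's actual architecture; the function names and the softmax weighting are hypothetical choices, and in the paper the influence weights would be learned rather than given.

```python
import numpy as np

def weighted_mean_field(neighbor_actions, influence_scores):
    """Weighted mean-field action for one sub-group.

    A plain MF method would use a uniform average of neighbor_actions
    (shape: n_neighbors x n_actions); here a softmax over hypothetical
    influence_scores weights each neighbor differently.
    """
    w = np.exp(influence_scores - influence_scores.max())  # stable softmax
    w = w / w.sum()
    return w @ neighbor_actions  # shape: (n_actions,)

def hierarchical_mean_field(groups):
    """Two-level aggregation: an influence-weighted mean field per
    sub-group, then a population-level average over sub-group fields."""
    group_fields = [weighted_mean_field(acts, scores) for acts, scores in groups]
    return np.mean(group_fields, axis=0)

# Toy example: two sub-groups over a 3-action space.
g1 = (np.eye(3)[[0, 1]], np.zeros(2))            # uniform influence
g2 = (np.eye(3)[[2, 2]], np.array([1.0, 1.0]))   # equal, nonzero scores
mf = hierarchical_mean_field([g1, g2])           # → [0.25, 0.25, 0.5]
```

The resulting vector remains a valid action distribution (it sums to 1), so it can be consumed by a mean-field Q-function in the same way as the standard uniform average.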

Published

2023-06-26

How to Cite

Yu, C. (2023). Hierarchical Mean-Field Deep Reinforcement Learning for Large-Scale Multiagent Systems. Proceedings of the AAAI Conference on Artificial Intelligence, 37(10), 11744-11752. https://doi.org/10.1609/aaai.v37i10.26387

Section

AAAI Technical Track on Multiagent Systems