Concentration Network for Reinforcement Learning of Large-Scale Multi-Agent Systems
DOI:
https://doi.org/10.1609/aaai.v36i9.21165
Keywords:
Multiagent Systems (MAS), Machine Learning (ML)
Abstract
When dealing with a series of imminent issues, humans can naturally concentrate on a subset of them by prioritizing the issues according to their contributions to motivational indices, e.g., the probability of winning a game. This idea of concentration offers insights into reinforcement learning for sophisticated Large-scale Multi-Agent Systems (LMAS) in which hundreds of agents participate. In such an LMAS, each agent receives a long series of entity observations at each step, which can overwhelm existing aggregation networks such as graph attention networks and cause inefficiency. In this paper, we propose a concentration network called ConcNet. First, ConcNet scores the observed entities according to several motivational indices, e.g., expected survival time and state value of the agents, and then ranks, prunes, and aggregates the encodings of the observed entities to extract features. Second, distinct from the well-known attention mechanism, ConcNet has a unique motivational subnetwork that explicitly considers the motivational indices when scoring the observed entities. Furthermore, we present a concentration policy gradient architecture that can learn effective policies in LMAS from scratch. Extensive experiments demonstrate that the presented architecture has excellent scalability and flexibility, and significantly outperforms existing methods on LMAS benchmarks.
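The score-rank-prune-aggregate pipeline in the abstract can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: a linear scorer stands in for the motivational subnetwork, and the choices of `k`, the mean aggregator, and all dimensions are illustrative assumptions.

```python
# Sketch of the "concentrate" idea: score each observed entity's
# encoding together with its motivational indices, then rank,
# prune to the top-k, and aggregate the surviving encodings.
# The linear scorer `w` is a stand-in for the paper's motivational
# subnetwork; k and all sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def concentrate(entity_enc, motiv_idx, w, k):
    """entity_enc: (n, d) encodings of n observed entities.
    motiv_idx: (n, m) motivational indices per entity
    (e.g., expected survival time, state value).
    w: (d + m,) weights of a linear scorer.
    Returns a single (d,) aggregated feature vector."""
    feats = np.concatenate([entity_enc, motiv_idx], axis=1)  # (n, d+m)
    scores = feats @ w                                       # score: (n,)
    top = np.argsort(scores)[::-1][:k]                       # rank + prune
    return entity_enc[top].mean(axis=0)                      # aggregate

n, d, m, k = 100, 8, 2, 16           # 100 observed entities, keep 16
enc = rng.standard_normal((n, d))
idx = rng.standard_normal((n, m))
w = rng.standard_normal(d + m)
feat = concentrate(enc, idx, w, k)
print(feat.shape)  # (8,)
```

Pruning to a fixed top-k keeps the per-agent feature size constant regardless of how many entities are observed, which is the property that lets this kind of aggregation scale to hundreds of agents.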
Published
2022-06-28
How to Cite
Fu, Q., Qiu, T., Yi, J., Pu, Z., & Wu, S. (2022). Concentration Network for Reinforcement Learning of Large-Scale Multi-Agent Systems. Proceedings of the AAAI Conference on Artificial Intelligence, 36(9), 9341-9349. https://doi.org/10.1609/aaai.v36i9.21165
Section
AAAI Technical Track on Multiagent Systems