Multi-Agent Incentive Communication via Decentralized Teammate Modeling
DOI: https://doi.org/10.1609/aaai.v36i9.21179
Keywords: Multiagent Systems (MAS), Machine Learning (ML)
Abstract
Effective communication can improve coordination in cooperative multi-agent reinforcement learning (MARL). One popular communication scheme is to exchange agents' local observations or latent embeddings and use them to augment each agent's local policy input. Such a communication paradigm can reduce uncertainty in local decision-making and induce implicit coordination. However, it enlarges agents' local policy spaces and increases learning complexity, leading to poor coordination in complex settings. To address this limitation, this paper proposes a novel framework named Multi-Agent Incentive Communication (MAIC) that allows each agent to learn to generate incentive messages that directly bias other agents' value functions, resulting in effective explicit coordination. Our method first learns targeted teammate models, with which each agent can anticipate teammates' action selections and generate tailored messages for specific agents. We further introduce a novel regularization that leverages interaction sparsity to improve communication efficiency. MAIC is agnostic to specific MARL algorithms and can be flexibly integrated with different value function factorization methods. Empirical results demonstrate that our method significantly outperforms baselines and achieves excellent performance on multiple cooperative MARL tasks.
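To make the mechanism described in the abstract concrete, below is a minimal PyTorch sketch of the incentive-communication idea: each agent models its teammates from its own local information, emits one incentive message per teammate, and each teammate's local Q-values are biased by the sum of incoming messages before value-function mixing. All module names, shapes, hyperparameters, and the simple additive bias are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of incentive communication (assumed structure, not the
# paper's official code). Each agent: a local Q-network, a teammate model,
# and a message head producing one incentive vector per teammate.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_AGENTS, N_ACTIONS, OBS_DIM, HID = 4, 5, 32, 64  # illustrative sizes


class MAICAgent(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(OBS_DIM, HID), nn.ReLU())
        # Local utility: Q-values from the agent's own observation.
        self.q_net = nn.Linear(HID, N_ACTIONS)
        # Decentralized teammate model: predicts each teammate's action
        # distribution from this agent's local embedding only.
        self.teammate_model = nn.Linear(HID, N_AGENTS * N_ACTIONS)
        # Message head: one incentive vector (a bias over a teammate's
        # Q-values) addressed to each teammate.
        self.msg_head = nn.Linear(HID, N_AGENTS * N_ACTIONS)

    def forward(self, obs):
        h = self.encoder(obs)
        q_local = self.q_net(h)                                  # (B, A)
        pred = self.teammate_model(h).view(-1, N_AGENTS, N_ACTIONS)
        teammate_probs = F.softmax(pred, dim=-1)  # used to tailor messages
        msgs = self.msg_head(h).view(-1, N_AGENTS, N_ACTIONS)
        return q_local, msgs, teammate_probs


def biased_q_values(agents, observations):
    """Return each agent's local Q-values plus the summed incentive
    messages addressed to it by every other agent (explicit coordination)."""
    q_locals, msgs = [], []
    for agent, obs in zip(agents, observations):
        q, m, _ = agent(obs)
        q_locals.append(q)
        msgs.append(m)
    biased = []
    for i in range(len(agents)):
        # Sum messages sent to agent i by all other agents j != i.
        incoming = sum(msgs[j][:, i] for j in range(len(agents)) if j != i)
        # An L1 penalty such as msgs[j].abs().mean() could stand in for
        # the paper's sparsity regularization on communication.
        biased.append(q_locals[i] + incoming)
    return biased  # feed into a value-factorization mixer, e.g. QMIX


# Usage: biased utilities for a batch of 8 joint observations.
agents = [MAICAgent() for _ in range(N_AGENTS)]
obs = [torch.randn(8, OBS_DIM) for _ in range(N_AGENTS)]
qs = biased_q_values(agents, obs)  # list of (8, N_ACTIONS) tensors
```

Because the bias is applied to teammates' value functions rather than concatenated into their policy inputs, the sketch keeps each agent's local policy space unchanged, which is the property the abstract contrasts against observation-sharing schemes.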
Published
2022-06-28
How to Cite
Yuan, L., Wang, J., Zhang, F., Wang, C., Zhang, Z., Yu, Y., & Zhang, C. (2022). Multi-Agent Incentive Communication via Decentralized Teammate Modeling. Proceedings of the AAAI Conference on Artificial Intelligence, 36(9), 9466-9474. https://doi.org/10.1609/aaai.v36i9.21179
Section
AAAI Technical Track on Multiagent Systems