Consensus Learning for Cooperative Multi-Agent Reinforcement Learning
DOI:
https://doi.org/10.1609/aaai.v37i10.26385
Keywords:
MAS: Multiagent Learning
Abstract
Almost all multi-agent reinforcement learning algorithms without communication follow the principle of centralized training with decentralized execution. During centralized training, agents can be guided by the same signals, such as the global state. During execution, however, they lack this shared signal and must choose actions from local observations alone. Inspired by viewpoint invariance and contrastive learning, we propose consensus learning for cooperative multi-agent reinforcement learning. Although each agent conditions only on its local observation, different agents can infer the same discrete consensus without communication. We feed the inferred one-hot consensus to each agent's network as an explicit input in a decentralized way, thereby fostering cooperation. With minor model modifications, the proposed framework can be extended to a variety of multi-agent reinforcement learning algorithms. We evaluate these variants on several fully cooperative tasks and obtain convincing results.
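A minimal sketch (not the authors' released implementation) of how an inferred one-hot consensus might be fed to an agent's network as an explicit input, as the abstract describes. The module names, sizes, and the projection-based consensus head are assumptions for illustration; in the paper the consensus inference would be trained (e.g., with a contrastive objective) so that agents observing the same underlying state agree.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConsensusAgent(nn.Module):
    """Hypothetical agent network: local observation + inferred one-hot consensus -> Q-values."""

    def __init__(self, obs_dim, n_actions, n_consensus=16, hidden_dim=64):
        super().__init__()
        # Assumed consensus head: maps a local observation to a discrete consensus index.
        self.consensus_head = nn.Linear(obs_dim, n_consensus)
        # Agent Q-network consumes the observation concatenated with the one-hot consensus.
        self.q_net = nn.Sequential(
            nn.Linear(obs_dim + n_consensus, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, n_actions),
        )
        self.n_consensus = n_consensus

    def forward(self, obs):
        # Infer a discrete consensus from the local observation alone (no communication).
        consensus_idx = self.consensus_head(obs).argmax(dim=-1)
        consensus_onehot = F.one_hot(consensus_idx, self.n_consensus).float()
        # Explicitly condition the action values on the inferred consensus.
        return self.q_net(torch.cat([obs, consensus_onehot], dim=-1))

# Usage: each agent acts independently from its own local observation.
agent = ConsensusAgent(obs_dim=32, n_actions=5)
q_values = agent(torch.randn(4, 32))  # batch of 4 local observations
```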
Published
2023-06-26
How to Cite
Xu, Z., Zhang, B., Li, D., Zhang, Z., Zhou, G., Chen, H., & Fan, G. (2023). Consensus Learning for Cooperative Multi-Agent Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(10), 11726-11734. https://doi.org/10.1609/aaai.v37i10.26385
Issue
Section
AAAI Technical Track on Multiagent Systems