Multi-agent Reinforcement Learning for Decentralized Coalition Formation Games
Keywords: Matching, Coalition Formation, Multi-agent Reinforcement Learning, Decentralized Learning
Abstract
We study the application of multi-agent reinforcement learning to game-theoretic problems. In particular, we are interested in coalition formation problems and their variants, such as hedonic coalition formation games (also called hedonic games), matching (a common type of hedonic game), and coalition formation for task allocation. We consider decentralized multi-agent systems in which autonomous agents inhabit an environment without any prior knowledge of the other agents or of the system. We also consider spatial formulations of these problems, which most of the coalition formation literature avoids because they increase computational complexity significantly. We propose novel decentralized heuristic learning and multi-agent reinforcement learning (MARL) approaches to train agents, and we evaluate them using game-theoretic criteria such as optimality, stability, and indices like the Shapley value.
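To illustrate one of the evaluation criteria mentioned above, the following is a minimal sketch of computing exact Shapley values for a small coalition game by averaging marginal contributions over all join orders. The characteristic function `v` (a simple three-player majority game) is a hypothetical example, not taken from the paper, and this brute-force method is only feasible for small numbers of agents.

```python
from itertools import permutations

def shapley_values(players, v):
    """Exact Shapley values: average each player's marginal
    contribution v(S + p) - v(S) over all join orders.
    Exponential in len(players); fine for small games."""
    perms = list(permutations(players))
    values = {p: 0.0 for p in players}
    for order in perms:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            values[p] += v(with_p) - v(coalition)
            coalition = with_p
    return {p: values[p] / len(perms) for p in players}

# Hypothetical characteristic function: any coalition of two or
# more players earns 1, singletons and the empty set earn 0.
def v(coalition):
    return 1.0 if len(coalition) >= 2 else 0.0

print(shapley_values(["a", "b", "c"], v))  # each player gets 1/3
```

In this symmetric majority game, only the second player to join any ordering adds value, and each player occupies that position in a third of the orderings, so each receives a Shapley value of 1/3.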
How to Cite
Taywade, K. (2021). Multi-agent Reinforcement Learning for Decentralized Coalition Formation Games. Proceedings of the AAAI Conference on Artificial Intelligence, 35(18), 15738-15739. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17866
The Twenty-Sixth AAAI/SIGAI Doctoral Consortium