Multi-agent Reinforcement Learning for Decentralized Coalition Formation Games

Authors

  • Kshitija Taywade, University of Kentucky

DOI:

https://doi.org/10.1609/aaai.v35i18.17866

Keywords:

Matching, Coalition Formation, Multi-agent Reinforcement Learning, Decentralized Learning

Abstract

We study the application of multi-agent reinforcement learning to game-theoretic problems. In particular, we are interested in coalition formation problems and their variants, such as hedonic coalition formation games (also called hedonic games), matching (a common type of hedonic game), and coalition formation for task allocation. We consider decentralized multi-agent systems in which autonomous agents inhabit an environment without any prior knowledge of the other agents or of the system. We also consider spatial formulations of these problems. Most of the literature on coalition formation does not consider such formulations because they significantly increase computational complexity. We propose novel decentralized heuristic learning and multi-agent reinforcement learning (MARL) approaches to train agents, and we evaluate them with game-theoretic criteria such as optimality, stability, and indices like the Shapley value.
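For reference, the Shapley value mentioned among the evaluation indices has the standard definition (the notation below is assumed here and is not taken from the abstract): in a characteristic-function game $(N, v)$, agent $i$'s Shapley value is

\[
\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,\bigl(|N| - |S| - 1\bigr)!}{|N|!} \bigl( v(S \cup \{i\}) - v(S) \bigr),
\]

that is, agent $i$'s marginal contribution to a coalition, averaged over all orders in which the coalition could have formed.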

Published

2021-05-18

How to Cite

Taywade, K. (2021). Multi-agent Reinforcement Learning for Decentralized Coalition Formation Games. Proceedings of the AAAI Conference on Artificial Intelligence, 35(18), 15738-15739. https://doi.org/10.1609/aaai.v35i18.17866

Section

The Twenty-Sixth AAAI/SIGAI Doctoral Consortium