TY - JOUR
AU - Wang, Jianhong
AU - Zhang, Yuan
AU - Kim, Tae-Kyun
AU - Gu, Yunjie
PY - 2020/04/03
Y2 - 2024/03/29
TI - Shapley Q-Value: A Local Reward Approach to Solve Global Reward Games
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 34
IS - 05
SE - AAAI Technical Track: Multiagent Systems
DO - 10.1609/aaai.v34i05.6220
UR - https://ojs.aaai.org/index.php/AAAI/article/view/6220
SP - 7285-7292
AB - Cooperative games are a critical research area in multi-agent reinforcement learning (MARL). Global reward games are a subclass of cooperative games in which all agents aim to maximize the global reward; credit assignment is an important problem studied in this setting. Most previous works adopted a non-cooperative-game theoretical framework with the shared reward approach, i.e., each agent is directly assigned the shared global reward. This, however, may give each agent an inaccurate signal of its contribution to the group, which can cause inefficient learning. To address this problem, we i) introduce a cooperative-game theoretical framework called the extended convex game (ECG), a superset of global reward games, and ii) propose a local reward approach called the Shapley Q-value. In contrast to the shared reward approach, the Shapley Q-value distributes the global reward so as to reflect each agent's own contribution. Moreover, we derive an MARL algorithm called Shapley Q-value deep deterministic policy gradient (SQDDPG), which uses the Shapley Q-value as the critic for each agent. We evaluate SQDDPG on Cooperative Navigation, Prey-and-Predator and Traffic Junction, compared with state-of-the-art algorithms, e.g., MADDPG, COMA, Independent DDPG and Independent A2C. In the experiments, SQDDPG shows a significant improvement in convergence rate. Finally, we plot the Shapley Q-value and validate its property of fair credit assignment.
ER -