Recursive Reasoning Graph for Multi-Agent Reinforcement Learning

Authors

  • Xiaobai Ma, Stanford University
  • David Isele, Honda Research Institute US
  • Jayesh K. Gupta, Stanford University
  • Kikuo Fujimura, Honda Research Institute US
  • Mykel J. Kochenderfer, Stanford University

DOI:

https://doi.org/10.1609/aaai.v36i7.20733

Keywords:

Machine Learning (ML)

Abstract

Multi-agent reinforcement learning (MARL) provides an efficient way to simultaneously learn policies for multiple interacting agents. However, in scenarios requiring complex interactions, existing algorithms can suffer from an inability to accurately anticipate how an agent's own actions influence other agents. Incorporating an ability to reason about other agents' potential responses can allow an agent to formulate more effective strategies. This paper adopts a recursive reasoning model in a centralized-training-decentralized-execution framework to help learning agents better cooperate with or compete against others. The proposed algorithm, referred to as the Recursive Reasoning Graph (R2G), shows state-of-the-art performance on multiple multi-agent particle and robotics games.
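To make the idea of recursive reasoning concrete, below is a minimal sketch of level-1 reasoning during action selection in a two-agent centralized-training-decentralized-execution setting: the agent scores each candidate action by the value of the joint action it expects to induce, using a learned opponent-response model and a centralized critic. All names (opponent_model, centralized_critic, select_action) and the placeholder functions are illustrative assumptions, not the paper's actual R2G implementation.

```python
# Level-1 recursive reasoning sketch for a two-agent CTDE setting.
# The opponent model and critic below are stand-ins for learned networks.
import numpy as np

rng = np.random.default_rng(0)

def opponent_model(state, my_action):
    """Hypothetical learned model: predicts the other agent's response
    to `my_action` in `state` (placeholder for a trained response policy)."""
    return np.tanh(state.sum() + my_action)

def centralized_critic(state, my_action, other_action):
    """Hypothetical centralized Q-function evaluating the joint action."""
    return -((my_action - 0.5) ** 2) - 0.1 * (other_action - my_action) ** 2

def select_action(state, candidate_actions):
    """Score each candidate action by the anticipated joint-action value."""
    best_a, best_q = None, -np.inf
    for a in candidate_actions:
        predicted_response = opponent_model(state, a)  # anticipate the other agent
        q = centralized_critic(state, a, predicted_response)
        if q > best_q:
            best_a, best_q = a, q
    return best_a

state = rng.normal(size=4)
print(select_action(state, candidate_actions=np.linspace(-1.0, 1.0, 11)))
```

In this sketch the critic is only used during (centralized) training-time reasoning; at execution time each agent would act from its own decentralized policy, consistent with the CTDE framework described in the abstract.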

Published

2022-06-28

How to Cite

Ma, X., Isele, D., Gupta, J. K., Fujimura, K., & Kochenderfer, M. J. (2022). Recursive Reasoning Graph for Multi-Agent Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 36(7), 7664-7671. https://doi.org/10.1609/aaai.v36i7.20733

Section

AAAI Technical Track on Machine Learning II