Resilient Multi-Agent Reinforcement Learning with Adversarial Value Decomposition

Authors

  • Thomy Phan, LMU Munich
  • Lenz Belzner, MaibornWolff
  • Thomas Gabor, LMU Munich
  • Andreas Sedlmeier, LMU Munich
  • Fabian Ritz, LMU Munich
  • Claudia Linnhoff-Popien, LMU Munich

DOI:

https://doi.org/10.1609/aaai.v35i13.17348

Keywords:

Multiagent Learning, Adversarial Learning & Robustness, Adversarial Agents, Reinforcement Learning

Abstract

We focus on resilience in cooperative multi-agent systems, where agents can change their behavior due to updates or failures of hardware and software components. Current state-of-the-art approaches to cooperative multi-agent reinforcement learning (MARL) have either focused on idealized settings without any changes or on very specialized scenarios, where the number of changing agents is fixed, e.g., in extreme cases with only one productive agent. Therefore, we propose Resilient Adversarial value Decomposition with Antagonist-Ratios (RADAR). RADAR offers a value decomposition scheme to train competing teams of varying size for improved resilience against arbitrary agent changes. We evaluate RADAR in two cooperative multi-agent domains and show that RADAR achieves better worst-case performance w.r.t. arbitrary agent changes than state-of-the-art MARL.
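To make the antagonist-ratio idea from the abstract concrete, the following is a minimal illustrative sketch, not the paper's actual RADAR algorithm: it assumes that, before each training episode, a ratio is sampled and the agent population is split into a cooperative (protagonist) team and an adversarial (antagonist) team of varying size. The function name `sample_teams` and the ratio values are hypothetical choices for illustration only.

```python
# Illustrative sketch only; NOT the RADAR implementation from the paper.
# Idea: sample an antagonist ratio per episode and partition the agents into
# a protagonist team and an antagonist team of varying size, so the
# protagonists are exposed to varying numbers of adversarial agents.
import random

def sample_teams(agent_ids, antagonist_ratios):
    """Sample a ratio and split agents into protagonist/antagonist teams."""
    ratio = random.choice(antagonist_ratios)            # e.g. 0.0, 0.25, 0.5
    n_antagonists = int(round(ratio * len(agent_ids)))
    shuffled = random.sample(agent_ids, len(agent_ids))  # random assignment
    antagonists = set(shuffled[:n_antagonists])
    protagonists = set(shuffled[n_antagonists:])
    return protagonists, antagonists

if __name__ == "__main__":
    agents = [f"agent_{i}" for i in range(8)]
    ratios = [0.0, 0.25, 0.5]  # hypothetical ratio set; the paper's may differ
    for episode in range(3):
        pro, ant = sample_teams(agents, ratios)
        print(f"episode {episode}: {len(pro)} protagonists, {len(ant)} antagonists")
```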

Published

2021-05-18

How to Cite

Phan, T., Belzner, L., Gabor, T., Sedlmeier, A., Ritz, F., & Linnhoff-Popien, C. (2021). Resilient Multi-Agent Reinforcement Learning with Adversarial Value Decomposition. Proceedings of the AAAI Conference on Artificial Intelligence, 35(13), 11308-11316. https://doi.org/10.1609/aaai.v35i13.17348

Issue

Vol. 35 No. 13 (2021)

Section

AAAI Technical Track on Multiagent Systems