Robust Multi-Agent Coordination via Evolutionary Generation of Auxiliary Adversarial Attackers

Authors

  • Lei Yuan, Nanjing University; Polixir Technologies
  • Ziqian Zhang, Nanjing University
  • Ke Xue, Nanjing University
  • Hao Yin, Nanjing University
  • Feng Chen, Nanjing University
  • Cong Guan, Nanjing University
  • Lihe Li, Nanjing University
  • Chao Qian, Nanjing University
  • Yang Yu, Nanjing University; Polixir Technologies

DOI:

https://doi.org/10.1609/aaai.v37i10.26388

Keywords:

MAS: Coordination and Collaboration, MAS: Adversarial Agents, MAS: Agent-Based Simulation and Emergent Behavior, MAS: Agreement, Argumentation & Negotiation, MAS: Mechanism Design, MAS: Multiagent Learning, MAS: Multiagent Planning, MAS: Multiagent Systems Under Uncertainty

Abstract

Cooperative Multi-agent Reinforcement Learning (CMARL) has shown great promise for many real-world applications. Previous works mainly focus on improving coordination ability by solving MARL-specific challenges (e.g., non-stationarity, credit assignment, scalability), but ignore the policy perturbation issue that arises when testing in a different environment. This issue has not been considered in either problem formulation or efficient algorithm design. To address it, we first model the problem as a Limited Policy Adversary Dec-POMDP (LPA-Dec-POMDP), where some coordinators from a team might accidentally and unpredictably encounter a limited number of malicious action attacks, while the regular coordinators still strive for the intended goal. We then propose Robust Multi-Agent Coordination via Evolutionary Generation of Auxiliary Adversarial Attackers (ROMANCE), which exposes the trained policy to diversified and strong auxiliary adversarial attacks during training, thus achieving high robustness under various policy perturbations. Concretely, to prevent the ego-system from overfitting to a specific attacker, we maintain a set of attackers optimized to guarantee both high attack quality and behavioral diversity. The quality objective is to minimize the ego-system's coordination performance, and a novel diversity regularizer based on sparse actions is applied to diversify the attackers' behaviors. The ego-system is then paired with a population of attackers selected from the maintained attacker set and trained alternately against these constantly evolving attackers. Extensive experiments on multiple scenarios from SMAC indicate that ROMANCE achieves robustness and generalization ability comparable to or better than other baselines.
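The abstract describes an alternating, population-based training scheme: attackers are evolved for attack quality and behavioral diversity, and the ego-system is repeatedly retrained against a selection of them. The Python sketch below illustrates that loop only; all routine names (evaluate_return, sparse_action_diversity, mutate, train_ego_against) and constants (population size, trade-off weight) are hypothetical placeholders, not the authors' actual implementation.

```python
import random

POP_SIZE = 8       # size of the maintained attacker set (assumed)
N_SELECTED = 4     # attackers paired with the ego-system each iteration (assumed)
N_GENERATIONS = 50
DIV_WEIGHT = 0.5   # quality/diversity trade-off weight (assumed)

def evaluate_return(ego, attacker):
    """Placeholder: roll out the ego-system while this attacker injects its
    limited budget of malicious action perturbations; return the episodic
    return of the ego team."""
    return random.random()

def sparse_action_diversity(attacker, population):
    """Placeholder for the sparse-action diversity regularizer: score how
    much this attacker's (sparse) attack decisions differ from the rest of
    the maintained set."""
    return random.random()

def mutate(attacker):
    """Placeholder variation operator; in practice this would perturb the
    attacker policy's parameters to produce an offspring."""
    return attacker

def train_ego_against(ego, attackers):
    """Placeholder: one round of CMARL training with episodes that include
    the selected attackers' action perturbations."""
    return ego

ego = "ego_policy"  # stand-in for the cooperative multi-agent policy
population = [f"attacker_{i}" for i in range(POP_SIZE)]

for gen in range(N_GENERATIONS):
    # Attacker fitness combines quality (lower ego return is better) with
    # behavioral diversity, so the set stays both strong and varied.
    def fitness(attacker):
        quality = -evaluate_return(ego, attacker)
        diversity = sparse_action_diversity(attacker, population)
        return quality + DIV_WEIGHT * diversity

    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[: POP_SIZE // 2]
    population = survivors + [mutate(a) for a in survivors]  # evolve the set

    # Alternation: retrain the ego-system against a selected attacker population.
    ego = train_ego_against(ego, population[:N_SELECTED])
```

The design choice mirrored here is that attacker fitness mixes two signals, quality and diversity, so the maintained set does not collapse onto a single strong but narrow attacker that the ego-system could overfit to.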

Published

2023-06-26

How to Cite

Yuan, L., Zhang, Z., Xue, K., Yin, H., Chen, F., Guan, C., Li, L., Qian, C., & Yu, Y. (2023). Robust Multi-Agent Coordination via Evolutionary Generation of Auxiliary Adversarial Attackers. Proceedings of the AAAI Conference on Artificial Intelligence, 37(10), 11753-11762. https://doi.org/10.1609/aaai.v37i10.26388

Section

AAAI Technical Track on Multiagent Systems