Certified Policy Smoothing for Cooperative Multi-Agent Reinforcement Learning

Authors

  • Ronghui Mu, Lancaster University
  • Wenjie Ruan, University of Exeter
  • Leandro Soriano Marcolino, Lancaster University
  • Gaojie Jin, University of Liverpool
  • Qiang Ni, Lancaster University

DOI:

https://doi.org/10.1609/aaai.v37i12.26756

Keywords:

General

Abstract

Cooperative multi-agent reinforcement learning (c-MARL) is widely applied in safety-critical scenarios, so analyzing the robustness of c-MARL models is profoundly important. However, robustness certification for c-MARL has not yet been explored in the community. In this paper, we propose a novel certification method, the first scalable approach for c-MARL that determines actions with guaranteed certified bounds. Compared to single-agent systems, c-MARL certification poses two key challenges: (i) the accumulation of uncertainty as the number of agents increases; (ii) the potentially limited impact of changing a single agent's action on the global team reward. These challenges prevent us from directly applying existing algorithms. Hence, we employ the false discovery rate (FDR) controlling procedure, taking the importance of each agent into account, to certify per-state robustness. We further propose a tree-search-based algorithm to find a lower bound of the global reward under the minimal certified perturbation. As our method is general, it can also be applied in single-agent environments. We empirically show that our certification bounds are much tighter than those of state-of-the-art RL certification solutions. We also evaluate our method on two popular c-MARL algorithms, QMIX and VDN, in two different environments, with two and four agents. The experimental results show that our method can certify the robustness of all c-MARL models in various environments. Our tool CertifyCMARL is available at https://github.com/TrustAI/CertifyCMARL.
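The FDR controlling procedure mentioned in the abstract is a standard tool from multiple hypothesis testing. As a minimal sketch of the idea (the classic Benjamini-Hochberg step-up procedure, shown here with toy p-values; the per-agent test statistics and agent-importance weighting used in the paper are not reproduced):

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: return the (sorted) indices
    of hypotheses rejected while controlling the false discovery rate
    at level alpha."""
    m = len(p_values)
    # Rank hypotheses by ascending p-value, remembering original indices.
    order = sorted(range(m), key=lambda i: p_values[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        # Step-up rule: find the largest rank k with p_(k) <= (k/m) * alpha.
        if p_values[i] <= rank / m * alpha:
            k_max = rank
    # Reject every hypothesis whose p-value rank is at most k_max.
    return sorted(order[:k_max])

# Toy example: one p-value per agent-level hypothesis test.
pvals = [0.001, 0.008, 0.039, 0.041, 0.20, 0.74]
print(benjamini_hochberg(pvals, alpha=0.05))  # → [0, 1]
```

Note that hypothesis 3 (p = 0.041 < 0.05) is not rejected: under FDR control the threshold tightens with rank, which is what keeps the error rate bounded as the number of simultaneous per-agent tests grows.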

Published

2023-06-26

How to Cite

Mu, R., Ruan, W., Soriano Marcolino, L., Jin, G., & Ni, Q. (2023). Certified Policy Smoothing for Cooperative Multi-Agent Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(12), 15046-15054. https://doi.org/10.1609/aaai.v37i12.26756

Section

AAAI Special Track on Safe and Robust AI