Model Checking for Adversarial Multi-Agent Reinforcement Learning with Reactive Defense Methods

Authors

  • Dennis Gross, Radboud University
  • Christoph Schmidl, Radboud University
  • Nils Jansen, Radboud University
  • Guillermo A. Pérez, University of Antwerp

DOI:

https://doi.org/10.1609/icaps.v33i1.27191

Keywords:

Learning for planning and scheduling, Multi-agent and distributed planning, Uncertainty and stochasticity in planning and scheduling

Abstract

Cooperative multi-agent reinforcement learning (CMARL) enables agents to achieve a common objective. However, the safety (a.k.a. robustness) of CMARL agents operating in critical environments is not guaranteed. In particular, agents are susceptible to adversarial noise in their observations that can mislead their decision-making. So-called denoisers aim to remove adversarial noise from observations, yet they are often error-prone. A key challenge for any rigorous safety verification technique in CMARL settings is the large number of states and transitions, which generally prohibits the construction of a (monolithic) model of the whole system. In this paper, we present a verification method for CMARL agents in settings with or without adversarial attacks or denoisers. Our method relies on a tight integration of CMARL and a verification technique referred to as model checking. We showcase the applicability of our method on various benchmarks from different domains. Our experiments show that our method is indeed suited to verify CMARL agents and that it scales better than a naive approach to model checking.
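To make the setting described in the abstract concrete, the following is a minimal, purely illustrative Python sketch of one joint CMARL step in which each agent's observation is perturbed by adversarial noise and then passed through an imperfect denoiser before the policy acts. It is not the authors' implementation or their verification method; the names (bounded_noise_attack, Denoiser, policy) and the specific attack/defense choices are hypothetical placeholders.

```python
# Illustrative sketch only (not the paper's method): adversarial noise on
# observations plus a reactive denoiser in a two-agent cooperative step.
import numpy as np

rng = np.random.default_rng(0)

def bounded_noise_attack(obs, epsilon=0.1):
    """Placeholder attack: add bounded sign noise to an observation."""
    return obs + epsilon * np.sign(rng.standard_normal(obs.shape))

class Denoiser:
    """Toy denoiser that clips observations back into the valid range.
    Real denoisers (e.g. learned autoencoders) are error-prone, which is
    precisely why verifying the closed-loop behavior matters."""
    def __call__(self, obs):
        return np.clip(obs, 0.0, 1.0)

def policy(agent_id, obs):
    """Placeholder deterministic policy: pick the largest feature index."""
    return int(np.argmax(obs))

# One joint step for two cooperative agents under attack + reactive defense.
observations = {0: rng.random(4), 1: rng.random(4)}
denoiser = Denoiser()

joint_action = {}
for agent_id, obs in observations.items():
    noisy = bounded_noise_attack(obs)   # adversary perturbs the observation
    cleaned = denoiser(noisy)           # reactive defense tries to remove the noise
    joint_action[agent_id] = policy(agent_id, cleaned)

print(joint_action)
```

In the paper's setting, it is this closed loop of trained policies, attacks, and denoisers whose reachable behavior is analyzed with model checking, rather than a monolithic model of the entire system built up front.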

Published

2023-07-01

How to Cite

Gross, D., Schmidl, C., Jansen, N., & Pérez, G. A. (2023). Model Checking for Adversarial Multi-Agent Reinforcement Learning with Reactive Defense Methods. Proceedings of the International Conference on Automated Planning and Scheduling, 33(1), 162-170. https://doi.org/10.1609/icaps.v33i1.27191