Data Poisoning to Fake a Nash Equilibrium for Markov Games

Authors

  • Young Wu, University of Wisconsin-Madison
  • Jeremy McMahan, University of Wisconsin-Madison
  • Xiaojin Zhu, University of Wisconsin-Madison
  • Qiaomin Xie, University of Wisconsin-Madison

DOI:

https://doi.org/10.1609/aaai.v38i14.29529

Keywords:

ML: Adversarial Learning & Robustness, GTEP: Adversarial Learning

Abstract

We characterize offline data poisoning attacks on Multi-Agent Reinforcement Learning (MARL), where an attacker may change a data set in an attempt to install a (potentially fictitious) unique Markov-perfect Nash equilibrium for a two-player zero-sum Markov game. We propose the unique Nash set, namely the set of games, specified by their Q functions, for which a given joint policy is the unique Nash equilibrium. The unique Nash set is central to poisoning attacks because an attack succeeds if and only if data poisoning pushes all plausible games inside it. The unique Nash set generalizes the reward polytope commonly used in inverse reinforcement learning to MARL. For zero-sum Markov games, both the unique Nash set and the set of plausible games induced by data are polytopes in the Q function space. We exhibit a linear program to efficiently compute the optimal poisoning attack. Our work sheds light on the structure of data poisoning attacks on offline MARL, a necessary step before one can design more robust MARL algorithms.
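To give intuition for the linear-programming attack described in the abstract, the sketch below works out the single-state special case, where the Q function of a two-player zero-sum Markov game reduces to a payoff matrix. It is an illustrative simplification, not the paper's actual formulation: the function name `poison_matrix_game`, the L1 attack cost, and the strictness margin `eps` are assumptions chosen for the example. A pure joint action is the unique Nash equilibrium of a zero-sum matrix game exactly when it is a strict saddle point, so the "unique Nash set" with margin `eps` is a polytope of linear inequalities, and the minimal-cost poisoning is one LP.

```python
import numpy as np
from scipy.optimize import linprog

def poison_matrix_game(R, target, eps=0.1):
    """Minimally modify (in L1 norm) the payoff matrix R of a zero-sum
    matrix game (row player maximizes, column player minimizes) so that
    the pure joint action `target` is a strict saddle point with margin
    eps, hence the unique Nash equilibrium of the poisoned game.

    Illustrative single-state sketch, not the paper's construction.
    Returns (R_poisoned, attack_cost).
    """
    R = np.asarray(R, dtype=float)
    m, n = R.shape
    i_star, j_star = target
    mn = m * n
    idx = lambda i, j: i * n + j

    # Variables: x = [R' (flattened, mn entries), t (mn slacks with
    # t_ij >= |R'_ij - R_ij|)].  Objective: minimize sum of t.
    c = np.concatenate([np.zeros(mn), np.ones(mn)])
    A_ub, b_ub = [], []

    # Strict saddle-point constraints (the margin-eps "unique Nash" polytope):
    #   R'[i*,j*] >= R'[i,j*] + eps  for all i != i*  (row deviations lose)
    #   R'[i*,j*] <= R'[i*,j] - eps  for all j != j*  (column deviations lose)
    for i in range(m):
        if i != i_star:
            row = np.zeros(2 * mn)
            row[idx(i, j_star)] = 1.0
            row[idx(i_star, j_star)] = -1.0
            A_ub.append(row); b_ub.append(-eps)
    for j in range(n):
        if j != j_star:
            row = np.zeros(2 * mn)
            row[idx(i_star, j_star)] = 1.0
            row[idx(i_star, j)] = -1.0
            A_ub.append(row); b_ub.append(-eps)

    # Linearize the absolute values:  +-(R'_ij - R_ij) <= t_ij.
    for i in range(m):
        for j in range(n):
            row = np.zeros(2 * mn)
            row[idx(i, j)] = 1.0; row[mn + idx(i, j)] = -1.0
            A_ub.append(row); b_ub.append(R[i, j])
            row = np.zeros(2 * mn)
            row[idx(i, j)] = -1.0; row[mn + idx(i, j)] = -1.0
            A_ub.append(row); b_ub.append(-R[i, j])

    bounds = [(None, None)] * mn + [(0, None)] * mn  # R' free, t >= 0
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
    return res.x[:mn].reshape(m, n), res.fun
```

For example, poisoning `R = [[0, 1], [2, -1]]` so that joint action `(0, 0)` becomes the unique equilibrium requires closing the gap `R[1,0] - R[0,0] = 2` plus the margin, for a minimal L1 cost of `2 + eps`. In the full Markov-game setting of the paper, the same idea applies to Q functions, with additional constraints tying the plausible games to the poisoned data set.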

Published

2024-03-24

How to Cite

Wu, Y., McMahan, J., Zhu, X., & Xie, Q. (2024). Data Poisoning to Fake a Nash Equilibrium for Markov Games. Proceedings of the AAAI Conference on Artificial Intelligence, 38(14), 15979-15987. https://doi.org/10.1609/aaai.v38i14.29529

Section

AAAI Technical Track on Machine Learning V