Analyzing the Effectiveness of Adversary Modeling in Security Games

Authors

  • Thanh Nguyen, University of Southern California
  • Rong Yang, University of Southern California
  • Amos Azaria, Bar-Ilan University
  • Sarit Kraus, Bar-Ilan University and University of Maryland
  • Milind Tambe, University of Southern California

DOI:

https://doi.org/10.1609/aaai.v27i1.8599

Keywords:

security games, bounded rationality, human decision making, Stackelberg games

Abstract

Recent deployments of Stackelberg security games (SSG) have led to two competing approaches to handling boundedly rational human adversaries: (1) integrating models of human (adversary) decision-making into the game-theoretic algorithms, and (2) applying robust optimization techniques that avoid adversary modeling. A recent algorithm (MATCH) based on the second approach was shown to outperform the leading modeling-based algorithm even in the presence of a significant amount of data. Is there then any value in using human behavior models in solving SSGs? Through extensive experiments with 547 human subjects playing 11102 games in total, we emphatically answer the question in the affirmative, while providing the following key contributions: (i) we show that our algorithm, SU-BRQR, based on a novel integration of a human behavior model with a subjective utility function, significantly outperforms both MATCH and its improvements; (ii) we are the first to present experimental results with security intelligence experts, and find that even though the experts are more rational than the Amazon Mechanical Turk workers, SU-BRQR still outperforms an approach assuming perfect rationality (and, to a more limited extent, MATCH); (iii) we show the advantage of SU-BRQR in a new, large game setting and demonstrate that sufficient data enables it to improve its performance over MATCH.
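For context, SU-BRQR models the adversary's target choice as a quantal (softmax) response over a weighted subjective utility of each target's coverage probability, reward, and penalty. The sketch below illustrates how such attack probabilities could be computed; the weight values here are illustrative placeholders, not the parameters learned from subject data in the paper:

```python
import math

def suqr_attack_probs(coverage, rewards, penalties, w=(-8.0, 0.5, 0.3)):
    """Quantal-response attack distribution over targets under a linear
    subjective utility SU_t = w1*x_t + w2*R_t + w3*P_t, where x_t is the
    defender's coverage probability on target t, R_t the attacker's reward,
    and P_t the attacker's penalty. Weights w are illustrative only.
    """
    w1, w2, w3 = w
    su = [w1 * x + w2 * r + w3 * p
          for x, r, p in zip(coverage, rewards, penalties)]
    m = max(su)                          # shift for numerical stability
    exps = [math.exp(s - m) for s in su]
    z = sum(exps)
    return [e / z for e in exps]        # softmax over subjective utilities
```

With a negative weight on coverage, lightly covered targets receive higher attack probability, which is the qualitative behavior the model is meant to capture; the defender's optimization then chooses coverage against this response.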

Published

2013-06-30

How to Cite

Nguyen, T., Yang, R., Azaria, A., Kraus, S., & Tambe, M. (2013). Analyzing the Effectiveness of Adversary Modeling in Security Games. Proceedings of the AAAI Conference on Artificial Intelligence, 27(1), 718-724. https://doi.org/10.1609/aaai.v27i1.8599