Regret-Based Optimization and Preference Elicitation for Stackelberg Security Games with Uncertainty

Authors

  • Thanh Nguyen, University of Southern California
  • Amulya Yadav, University of Southern California
  • Bo An, Nanyang Technological University
  • Milind Tambe, University of Southern California
  • Craig Boutilier, University of Toronto

DOI:

https://doi.org/10.1609/aaai.v28i1.8804

Keywords:

security game, minimax regret, uncertainty, preference elicitation

Abstract

Stackelberg security games (SSGs) have been deployed in a number of real-world domains. One key challenge in these applications is the assessment of attacker payoffs, which may not be perfectly known. Previous work has studied SSGs with uncertain payoffs modeled by interval uncertainty and provided maximin-based robust solutions. In contrast, in this work we propose the use of the less conservative minimax regret decision criterion for such payoff-uncertain SSGs and present the first algorithms for computing minimax regret for SSGs. We also address the challenge of preference elicitation, using minimax regret to develop the first elicitation strategies for SSGs. Experimental results validate the effectiveness of our approaches.
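
As a point of reference, here is a minimal sketch of the minimax regret criterion under interval payoff uncertainty; the notation x, U, and U_d below is assumed for illustration and is not taken verbatim from the paper. Let x be a defender mixed strategy, U the set of attacker payoff realizations consistent with the given intervals, and U_d(x, u) the defender's expected utility under realization u \in U when the attacker plays a best response. Then

  MR(x) = \max_{u \in U} \Big[ \max_{x'} U_d(x', u) - U_d(x, u) \Big], \qquad x^* \in \arg\min_{x} MR(x),

so minimax regret minimizes the worst-case loss relative to the best strategy for each payoff realization, whereas the maximin criterion instead maximizes \min_{u \in U} U_d(x, u), guarding against the single worst realization and thus yielding more conservative strategies.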

Published

2014-06-21

How to Cite

Nguyen, T., Yadav, A., An, B., Tambe, M., & Boutilier, C. (2014). Regret-Based Optimization and Preference Elicitation for Stackelberg Security Games with Uncertainty. Proceedings of the AAAI Conference on Artificial Intelligence, 28(1). https://doi.org/10.1609/aaai.v28i1.8804

Issue

Vol. 28 No. 1 (2014)

Section

AAAI Technical Track: Game Theory and Economic Paradigms