Policy Learning for Continuous Space Security Games Using Neural Networks


  • Nitin Kamra University of Southern California
  • Umang Gupta University of Southern California
  • Fei Fang Carnegie Mellon University
  • Yan Liu University of Southern California
  • Milind Tambe University of Southern California


Keywords

Stackelberg Security Games, Game Theory, Nash Equilibrium, Stackelberg Equilibrium, Defender Policy Optimization, Policy Gradient, Fictitious Play


Abstract

A wealth of algorithms centered around (integer) linear programming has been proposed to compute equilibrium strategies in security games with discrete states and actions. However, in practice many domains possess continuous state and action spaces. In this paper, we consider a continuous space security game model with infinite-size action sets for players and present a novel deep learning-based approach to extend the existing toolkit for solving security games. Specifically, we present (i) OptGradFP, a novel and general algorithm that searches for the optimal defender strategy in a parameterized continuous search space, and can also be used to learn policies over multiple game states simultaneously; (ii) OptGradFP-NN, a convolutional neural network-based implementation of OptGradFP for continuous space security games. We demonstrate the potential to predict good defender strategies via experiments and analysis of OptGradFP and OptGradFP-NN on discrete and continuous game settings.
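To make the idea of combining policy gradients with fictitious play concrete, here is a minimal sketch on a toy one-dimensional security game. The game (a defender guarding a point on [0, 1] against an attacker who strikes the least-defended endpoint), the Gaussian policy, and all hyperparameters are illustrative assumptions for this sketch, not the paper's actual OptGradFP setup or experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def defender_reward(d, a):
    # Defender guards point d on [0, 1]; attacker strikes point a.
    # Reward is higher the closer the defender is to the attack.
    return -np.abs(d - a)

mu, sigma = 0.2, 0.1   # Gaussian defender policy N(mu, sigma^2); sigma held fixed
mu_history = []        # defender's historical strategies (fictitious play)

for it in range(2000):
    lr = 1.0 / (20.0 + it)  # decaying step size, as in fictitious-play-style schemes
    mu_history.append(mu)
    # Attacker best-responds to the *average* historical defender strategy:
    # it attacks the endpoint farthest from the average defended position.
    a = 0.0 if np.mean(mu_history) > 0.5 else 1.0
    # Policy-gradient (REINFORCE) update of the defender from a batch of plays,
    # using the batch-mean reward as a variance-reducing baseline.
    d = rng.normal(mu, sigma, size=64)
    r = defender_reward(d, a)
    grad_mu = np.mean((r - r.mean()) * (d - mu) / sigma**2)  # score-function estimator
    mu = float(np.clip(mu + lr * grad_mu, 0.0, 1.0))

# Fictitious play gives guarantees about the *average* strategy, so we look at
# the averaged defended position over the later iterations; it should settle
# near 0.5, which makes both endpoints equally attractive to the attacker.
avg_mu = float(np.mean(mu_history[len(mu_history) // 2:]))
print(round(avg_mu, 2))
```

The key design point mirrored from the abstract is that the defender's strategy lives in a parameterized continuous space (here a single Gaussian mean) and is improved by stochastic gradient steps against an opponent drawn from fictitious-play dynamics, rather than by enumerating discrete actions in a linear program.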




How to Cite

Kamra, N., Gupta, U., Fang, F., Liu, Y., & Tambe, M. (2018). Policy Learning for Continuous Space Security Games Using Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/11446



AAAI Technical Track: Game Theory and Economic Paradigms