Policy Learning for Continuous Space Security Games Using Neural Networks

Authors

  • Nitin Kamra, University of Southern California
  • Umang Gupta, University of Southern California
  • Fei Fang, Carnegie Mellon University
  • Yan Liu, University of Southern California
  • Milind Tambe, University of Southern California

DOI:

https://doi.org/10.1609/aaai.v32i1.11446

Keywords:

Stackelberg Security Games, Game Theory, Nash Equilibrium, Stackelberg Equilibrium, Defender Policy Optimization, Policy Gradient, Fictitious Play

Abstract

A wealth of algorithms centered around (integer) linear programming has been proposed to compute equilibrium strategies in security games with discrete states and actions. In practice, however, many domains have continuous state and action spaces. In this paper, we consider a continuous space security game model with infinite-size action sets for the players and present a novel deep learning based approach that extends the existing toolkit for solving security games. Specifically, we present (i) OptGradFP, a novel and general algorithm that searches for the optimal defender strategy in a parameterized continuous search space and can also be used to learn policies over multiple game states simultaneously; and (ii) OptGradFP-NN, a convolutional neural network based implementation of OptGradFP for continuous space security games. Experiments with and analysis of OptGradFP and OptGradFP-NN in discrete and continuous game settings demonstrate their potential to predict good defender strategies.
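To give a flavor of the recipe the abstract describes, the following is a minimal, illustrative numpy sketch that interleaves policy-gradient updates for a parameterized defender strategy with fictitious-play best responses by the attacker. The one-dimensional game, the payoff function, the Gaussian policy, and all hyperparameters below are our own assumptions for illustration; this is not the paper's model or implementation, which uses convolutional neural network policies on far richer games.

```python
import numpy as np

# Toy, illustrative setup (assumed, not from the paper): a 1-D game on [0, 1]
# where the defender patrols a point drawn from a Gaussian policy and catches
# the attacker if it lands within a fixed radius.
rng = np.random.default_rng(0)

R = 0.1        # capture radius (assumed)
SIGMA = 0.1    # fixed std of the defender's Gaussian policy (assumed)
LR = 0.05      # policy-gradient step size
EPOCHS = 200   # fictitious-play rounds
SAMPLES = 64   # Monte Carlo samples per round

def defender_utility(x_d, x_a):
    """+1 if the attacker is caught, -1 otherwise (zero-sum toy payoff)."""
    return np.where(np.abs(x_d - x_a) < R, 1.0, -1.0)

theta = 0.3                         # learnable mean of the defender policy
attacker_history = [rng.uniform()]  # fictitious-play record of attacker plays

for _ in range(EPOCHS):
    # Attacker best-responds to the defender's current stochastic policy,
    # approximated by a grid search against sampled defender positions.
    candidates = np.linspace(0.0, 1.0, 101)
    d = np.clip(rng.normal(theta, SIGMA, SAMPLES), 0.0, 1.0)
    att_payoff = [-defender_utility(d, c).mean() for c in candidates]
    attacker_history.append(candidates[int(np.argmax(att_payoff))])

    # Defender takes a REINFORCE-style policy-gradient step against the
    # attacker's empirical mixture of past plays (the fictitious-play part).
    x_a = rng.choice(np.array(attacker_history), size=SAMPLES)
    x_d = rng.normal(theta, SIGMA, SAMPLES)
    returns = defender_utility(np.clip(x_d, 0.0, 1.0), x_a)
    # Gradient of log N(x_d; theta, SIGMA) w.r.t. theta is (x_d - theta)/SIGMA^2.
    grad = np.mean((returns - returns.mean()) * (x_d - theta) / SIGMA**2)
    theta += LR * grad

print(f"learned defender mean: {theta:.3f}")
```

The sketch only mirrors the high-level interleaving of policy gradients with fictitious play; in OptGradFP-NN the scalar parameter theta is replaced by the weights of a convolutional neural network that maps game states to defender strategies.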

Published

2018-04-25

How to Cite

Kamra, N., Gupta, U., Fang, F., Liu, Y., & Tambe, M. (2018). Policy Learning for Continuous Space Security Games Using Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11446

Issue

Vol. 32 No. 1 (2018)

Section

AAAI Technical Track: Game Theory and Economic Paradigms