Target Surveillance in Adversarial Environments Using POMDPs

Authors

  • Maxim Egorov, Stanford University
  • Mykel Kochenderfer, Stanford University
  • Jaak Uudmae, Stanford University

DOI:

https://doi.org/10.1609/aaai.v30i1.10126

Keywords:

POMDPs, Level-k Reasoning, Target Surveillance, Adversarial Modeling

Abstract

This paper introduces an extension of the target surveillance problem in which the surveillance agent is exposed to an adversarial ballistic threat. The problem is formulated as a mixed observability Markov decision process (MOMDP), a factored variant of the partially observable Markov decision process, to account for state and dynamic uncertainties. The control policy resulting from solving the MOMDP aims to maximize the frequency of target observations while minimizing exposure to the ballistic threat. The adversary’s behavior is modeled with a level-k policy, which is used to construct the state transition function of the MOMDP. The approach is empirically evaluated against a MOMDP adversary and against a human opponent in a target surveillance computer game. The empirical results demonstrate that, on average, level-3 MOMDP policies outperform both lower-level reasoning policies and human players.
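The level-k model referenced in the abstract can be illustrated with a minimal sketch: level-0 players act according to a naive baseline (here, uniformly at random), and each level-k player best-responds to a level-(k-1) model of the other player. The payoff matrix, the uniform level-0 baseline, and the zero-sum framing below are illustrative assumptions for exposition, not details taken from the paper, where the hierarchy is instead built over MOMDP policies.

```python
import numpy as np

# Illustrative payoff matrix for a two-player zero-sum encounter:
# rows = surveillance agent actions, columns = adversary actions.
# The numbers are made up for this sketch.
A = np.array([[ 3.0, -1.0,  0.0],
              [ 0.0,  2.0, -2.0],
              [-1.0,  0.0,  1.0]])

def level_k_strategies(payoff, k):
    """Return (agent_strategy, adversary_strategy) at reasoning level k.

    Level 0 plays uniformly at random; each higher level best-responds
    to the level-(k-1) model of the other player. In the zero-sum
    setting the adversary's payoff is the negative of the agent's.
    """
    n_rows, n_cols = payoff.shape
    agent = np.full(n_rows, 1.0 / n_rows)      # level-0 agent: uniform
    adversary = np.full(n_cols, 1.0 / n_cols)  # level-0 adversary: uniform
    for _ in range(k):
        # Agent best-responds to the current adversary model.
        new_agent = np.zeros(n_rows)
        new_agent[np.argmax(payoff @ adversary)] = 1.0
        # Adversary best-responds to the current agent model
        # (it minimizes the agent's expected payoff).
        new_adversary = np.zeros(n_cols)
        new_adversary[np.argmin(agent @ payoff)] = 1.0
        agent, adversary = new_agent, new_adversary
    return agent, adversary

agent3, adversary3 = level_k_strategies(A, 3)
print(agent3, adversary3)  # pure strategies reached at level 3
```

In the paper's setting the best-response step is replaced by solving a (MO)MDP against the level-(k-1) opponent policy, but the recursive structure — each level optimizing against a fixed model of the level below — is the same.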

Published

2016-03-03

How to Cite

Egorov, M., Kochenderfer, M., & Uudmae, J. (2016). Target Surveillance in Adversarial Environments Using POMDPs. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). https://doi.org/10.1609/aaai.v30i1.10126

Section

Technical Papers: Multiagent Systems