Bayesian Learning of Other Agents' Finite Controllers for Interactive POMDPs

Authors

  • Alessandro Panella, University of Illinois at Chicago
  • Piotr Gmytrasiewicz, University of Illinois at Chicago

DOI

https://doi.org/10.1609/aaai.v30i1.10136

Keywords

Multiagent Systems, Opponent Modeling, Bayesian Learning, Dirichlet Process

Abstract

We consider an autonomous agent that operates in a stochastic, partially observable, multiagent environment and explicitly models the other agents as probabilistic deterministic finite-state controllers (PDFCs) in order to predict their actions. We assume that these models are not given to the agent, but must instead be learned from (possibly imperfect) observations of the other agents' behavior. The agent maintains a belief over the other agents' models, which it updates via Bayesian inference. To represent this belief, we place a flexible stick-breaking distribution over PDFCs, which allows the posterior to concentrate around controllers whose size is not bounded a priori but instead scales with the complexity of the observed data. Since this Bayesian inference task is not analytically tractable, we devise a Markov chain Monte Carlo algorithm to approximate the posterior distribution. The agent then embeds the result of this inference into its own decision-making process using the interactive POMDP framework. We show that our algorithm learns agent models that are behaviorally accurate for problems of varying complexity, and that the agent's performance improves as a result.
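The paper defines the model formally; as a rough illustration of the two ingredients named in the abstract, the sketch below shows a PDFC (deterministic node transitions driven by observations, stochastic actions at each node) together with a truncated stick-breaking construction for the prior over transition targets. All class names, parameters, and prior choices here are our own illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def stick_breaking_weights(alpha, k, rng):
    """Sample a truncated stick-breaking (GEM) weight vector of length k.
    Truncating and renormalizing is a simplification for illustration;
    the paper's prior places mass on controllers of unbounded size."""
    betas = rng.beta(1.0, alpha, size=k)
    sticks = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    w = betas * sticks
    return w / w.sum()

class PDFC:
    """Probabilistic deterministic finite-state controller: the node
    transition is a deterministic function of (node, observation), while
    the action emitted at each node is drawn from a distribution."""

    def __init__(self, n_nodes, n_actions, n_obs, alpha, rng):
        # Draw each transition target from shared stick-breaking weights,
        # loosely mimicking a prior that favors reusing a few nodes.
        w = stick_breaking_weights(alpha, n_nodes, rng)
        self.delta = rng.choice(n_nodes, size=(n_nodes, n_obs), p=w)
        # Per-node action distributions with a symmetric Dirichlet prior.
        self.action_probs = rng.dirichlet(np.ones(n_actions), size=n_nodes)

    def act(self, node, rng):
        """Sample the action emitted at the current node."""
        return int(rng.choice(self.action_probs.shape[1],
                              p=self.action_probs[node]))

    def step(self, node, obs):
        """Deterministically transition on the received observation."""
        return int(self.delta[node, obs])

# Hypothetical usage: simulate the modeled agent's behavior for a few steps.
rng = np.random.default_rng(0)
ctrl = PDFC(n_nodes=4, n_actions=3, n_obs=2, alpha=1.0, rng=rng)
node = 0
for obs in [0, 1, 1, 0]:
    action = ctrl.act(node, rng)   # predicted action of the other agent
    node = ctrl.step(node, obs)    # controller's next node
```

Learning inverts this generative sketch: given an observed action trace of the other agent, the paper's MCMC procedure samples controllers (sizes, transitions, and action distributions) from the posterior, which the I-POMDP agent then uses to predict the other agent's actions.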

Published

2016-03-03

How to Cite

Panella, A., & Gmytrasiewicz, P. (2016). Bayesian Learning of Other Agents’ Finite Controllers for Interactive POMDPs. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). https://doi.org/10.1609/aaai.v30i1.10136

Section

Technical Papers: Multiagent Systems