Tuning Belief Revision for Coordination with Inconsistent Teammates

Authors

  • Trevor Sarratt, University of California, Santa Cruz
  • Arnav Jhala, University of California, Santa Cruz

DOI:

https://doi.org/10.1609/aiide.v11i1.12797

Abstract

Coordination with an unknown human teammate is a notable challenge for cooperative agents. The behavior of human players in games with cooperating AI agents is often sub-optimal and inconsistent, leading to choreographed and limited cooperative scenarios in games. This paper considers the difficulty of cooperating with a teammate whose goal and corresponding behavior change periodically. Previous work uses Bayesian models for updating beliefs about cooperating agents based on observations. We describe belief models for on-line planning, discuss tuning in the presence of noisy observations, and empirically demonstrate their effectiveness in coordinating with inconsistent agents in a simple domain. Further work in this area promises to lead to techniques for more interesting cooperative AI in games.
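The abstract describes Bayesian updating of beliefs over a teammate's possible goals, tuned so the agent can track a teammate whose goal changes periodically. The paper's exact update rule is not reproduced here; the sketch below is an illustrative, hypothetical version of such a tuned update, where a smoothing parameter `alpha` (an assumption, not a parameter from the paper) keeps probability mass on every goal so beliefs can recover quickly after the teammate switches goals.

```python
def update_beliefs(beliefs, likelihoods, alpha=0.1):
    """One tuned Bayesian belief-revision step over candidate teammate goals.

    beliefs:     dict mapping goal -> prior probability (sums to 1)
    likelihoods: dict mapping goal -> P(observed action | goal)
    alpha:       smoothing weight mixing the posterior with a uniform
                 distribution, so no goal's probability collapses to zero
                 even after many contradicting observations.
    """
    # Standard Bayes step: posterior is proportional to prior * likelihood.
    posterior = {g: beliefs[g] * likelihoods[g] for g in beliefs}
    total = sum(posterior.values())
    if total == 0.0:
        # Degenerate case (observation impossible under every goal):
        # fall back to a uniform posterior.
        posterior = {g: 1.0 for g in beliefs}
        total = float(len(beliefs))
    posterior = {g: p / total for g, p in posterior.items()}

    # Tuning step: mix with a uniform distribution. Larger alpha forgets
    # evidence faster, which helps track an inconsistent teammate.
    n = len(beliefs)
    return {g: (1.0 - alpha) * p + alpha / n for g, p in posterior.items()}


# Example: three candidate goals, uniform prior, one observation that is
# most likely under goal "a".
prior = {"a": 1 / 3, "b": 1 / 3, "c": 1 / 3}
obs_likelihood = {"a": 0.8, "b": 0.1, "c": 0.1}
updated = update_beliefs(prior, obs_likelihood)
```

With `alpha > 0`, goals "b" and "c" retain non-zero probability after the update, so a later run of observations consistent with "b" can shift the belief back without the update getting stuck.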

Published

2021-06-24

How to Cite

Sarratt, T., & Jhala, A. (2021). Tuning Belief Revision for Coordination with Inconsistent Teammates. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 11(1), 177-183. https://doi.org/10.1609/aiide.v11i1.12797