IMPLANT: An Integrated MDP and POMDP Learning AgeNT for Adaptive Games

Authors

  • Chek Tien Tan, National University of Singapore
  • Ho-lun Cheng, National University of Singapore

DOI:

https://doi.org/10.1609/aiide.v5i1.12352

Keywords:

MDP, POMDP, Decision Theory, Game Agents, Learning

Abstract

This paper proposes an Integrated MDP and POMDP Learning AgeNT (IMPLANT) architecture for adaptation in modern games. A modern game world essentially involves a human player acting in a virtual environment, which suggests that the problem can be decomposed into two parts: a partially observable player model and a completely observable game environment. Following this decomposition, the IMPLANT architecture extracts both a POMDP and an MDP abstract model from the underlying game world. Abstract action policies are pre-computed from each model and then merged into a single optimal policy. Coupled with a small amount of online learning, the architecture is able to adapt to both the player and the game environment within practical pre-computation and query times. An empirical proof of concept is provided through an implementation in a tennis video game, in which the IMPLANT agent exhibits a superior balance of adaptation performance and speed compared with other agent implementations.
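As a rough illustration of the decomposition described in the abstract, the sketch below shows one way the idea could be realized: a completely observable environment MDP solved offline with value iteration, a player-model POMDP approximated offline with QMDP-style values, and a merged policy queried online together with a Bayesian belief update over the hidden player state. All model sizes, parameters, the 50/50 merging weight, and the QMDP approximation are illustrative assumptions, not the paper's actual formulation.

    import numpy as np

    # Illustrative sizes only (not from the paper).
    N_ENV_STATES, N_PLAYER_STATES, N_ACTIONS, N_OBS = 6, 4, 3, 3
    rng = np.random.default_rng(0)

    def random_stochastic(shape):
        m = rng.random(shape)
        return m / m.sum(axis=-1, keepdims=True)

    # Completely observable game-environment MDP: T_env[s, a, s'], R_env[s, a].
    T_env = random_stochastic((N_ENV_STATES, N_ACTIONS, N_ENV_STATES))
    R_env = rng.random((N_ENV_STATES, N_ACTIONS))

    # Partially observable player-model POMDP: T_pl[s, a, s'], R_pl[s, a], O_pl[s', a, o].
    T_pl = random_stochastic((N_PLAYER_STATES, N_ACTIONS, N_PLAYER_STATES))
    R_pl = rng.random((N_PLAYER_STATES, N_ACTIONS))
    O_pl = random_stochastic((N_PLAYER_STATES, N_ACTIONS, N_OBS))

    GAMMA = 0.95

    def mdp_value_iteration(T, R, gamma=GAMMA, iters=200):
        # Offline pre-computation of Q-values for a fully observable model.
        Q = np.zeros(R.shape)
        for _ in range(iters):
            V = Q.max(axis=1)
            Q = R + gamma * np.einsum("sap,p->sa", T, V)
        return Q

    # Pre-compute both abstract policies offline.
    Q_env = mdp_value_iteration(T_env, R_env)   # environment policy
    Q_pl = mdp_value_iteration(T_pl, R_pl)      # QMDP-style player policy

    def belief_update(belief, action, obs):
        # Bayesian update of the belief over the hidden player state.
        b = O_pl[:, action, obs] * (belief @ T_pl[:, action, :])
        total = b.sum()
        return b / total if total > 0 else np.full_like(b, 1.0 / b.size)

    def merged_policy(env_state, belief, w=0.5):
        # Merge the two pre-computed policies; the weight w stands in for
        # whatever merging rule the architecture actually uses.
        q = w * Q_env[env_state] + (1.0 - w) * (belief @ Q_pl)
        return int(np.argmax(q))

    # Online query is just a few dot products, so it stays cheap at run time.
    belief = np.full(N_PLAYER_STATES, 1.0 / N_PLAYER_STATES)
    action = merged_policy(env_state=0, belief=belief)
    belief = belief_update(belief, action, obs=1)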

Published

2009-10-16

How to Cite

Tan, C. T., & Cheng, H.-L. (2009). IMPLANT: An Integrated MDP and POMDP Learning AgeNT for Adaptive Games. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 5(1), 94-99. https://doi.org/10.1609/aiide.v5i1.12352