The Self Organization of Context for Learning in Multiagent Games

Authors

  • Chris White University of Virginia
  • David Brogan University of Virginia

DOI:

https://doi.org/10.1609/aiide.v2i1.18752

Abstract

Reinforcement learning is an effective machine learning paradigm in domains represented by compact and discrete state-action spaces. In high-dimensional and continuous domains, tile coding with linear function approximation has been widely used to circumvent the curse of dimensionality, but it suffers from the drawback that human-guided identification of features is required to create effective tilings. The challenge is to find tilings that preserve the context necessary to evaluate the value of a state-action pair while limiting memory requirements. The technique presented in this paper addresses the difficulty of identifying context in high-dimensional domains. We have chosen RoboCup simulated soccer as a domain because its high-dimensional continuous state space makes it a formidable challenge for reinforcement learning algorithms. Using self-organizing maps and reinforcement learning in a two-pass process, our technique scales to large state spaces without requiring a large amount of domain knowledge to automatically form abstractions over the state space. Results show that our algorithm learns to play the game of soccer better than a contemporary hand-coded opponent.
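The two-pass idea from the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: pass one trains a self-organizing map (SOM) to cluster a continuous state space into discrete units, and pass two runs tabular Q-learning over those units. The toy one-dimensional navigation task, the 16-unit map, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

def train_som(samples, n_units=16, epochs=10, lr0=0.5, radius0=4.0, seed=0):
    """Pass 1: fit a 1-D SOM whose units tile the continuous state space.

    Hypothetical hyperparameters; the paper's SOM configuration may differ.
    """
    rng = np.random.default_rng(seed)
    weights = rng.uniform(samples.min(0), samples.max(0),
                          size=(n_units, samples.shape[1]))
    total_steps, t = epochs * len(samples), 0
    for _ in range(epochs):
        for x in samples:
            lr = lr0 * (1 - t / total_steps)          # decaying learning rate
            radius = radius0 * (1 - t / total_steps) + 1e-9  # shrinking neighborhood
            bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
            # Units near the best-matching unit (on the 1-D map) move toward x.
            grid_dist = np.abs(np.arange(n_units) - bmu)
            h = np.exp(-(grid_dist ** 2) / (2 * radius ** 2))
            weights += lr * h[:, None] * (x - weights)
            t += 1
    return weights

def som_state(weights, x):
    """Map a continuous observation to its best-matching unit's index."""
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))

def step_fn(x, a):
    """Toy continuous task: walk right along [0, 1] to reach the goal."""
    x2 = np.clip(x + (0.05 if a == 1 else -0.05), 0.0, 1.0)
    done = bool(x2[0] >= 0.999)
    return x2, (1.0 if done else -0.01), done

def q_learning(weights, n_actions, step, episodes=300, alpha=0.1,
               gamma=0.95, eps=0.2, max_steps=300, seed=0):
    """Pass 2: tabular Q-learning over the SOM's discrete units."""
    rng = np.random.default_rng(seed)
    q = np.zeros((len(weights), n_actions))
    for _ in range(episodes):
        x = np.array([0.0])
        for _ in range(max_steps):
            s = som_state(weights, x)
            a = int(rng.integers(n_actions)) if rng.random() < eps \
                else int(np.argmax(q[s]))
            x2, r, done = step(x, a)
            s2 = som_state(weights, x2)
            target = r + (0.0 if done else gamma * q[s2].max())
            q[s, a] += alpha * (target - q[s, a])
            x = x2
            if done:
                break
    return q
```

After training, the SOM units play the role that hand-crafted tilings would otherwise play: each continuous observation is reduced to a single discrete index, so the value function fits in a small table.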

Published

2021-09-29

How to Cite

White, C., & Brogan, D. (2021). The Self Organization of Context for Learning in Multiagent Games. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 2(1), 92-97. https://doi.org/10.1609/aiide.v2i1.18752