RL Generalization in a Theory of Mind Game Through a Sleep Metaphor (Student Abstract)

Authors

  • Tyler Malloy, Rensselaer Polytechnic Institute; IBM Research AI
  • Tim Klinger, IBM Research AI
  • Miao Liu, IBM Research AI
  • Gerald Tesauro, IBM Research AI
  • Matthew Riemer, IBM Research AI
  • Chris R. Sims, Rensselaer Polytechnic Institute

DOI:

https://doi.org/10.1609/aaai.v35i18.17917

Keywords:

Reinforcement Learning, Game Playing, Generalization, Theory Of Mind

Abstract

Training agents to learn efficiently in multi-agent environments can benefit from explicitly modelling other agents' beliefs, especially in complex limited-information games such as the card game Hanabi. However, generalization is also highly relevant to performance in these games, although model comparisons at large training timescales can be difficult. In this work, we address this by introducing a novel model trained using a sleep metaphor on a reduced-complexity version of the Hanabi game. This sleep metaphor consists of an altered training regimen as well as an information-theoretic constraint on the agent's policy. Experimental results demonstrate improved performance with this sleep-metaphor method and provide promising motivation for applying similar techniques to more complex methods that incorporate explicit models of other agents' beliefs.
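The abstract leaves the exact form of the information-theoretic constraint unspecified. As a rough illustration only, not the paper's implementation, one common way to impose such a constraint is to penalize the mutual information between states and actions, estimated via the KL divergence between the state-conditional policy and the batch-marginal action distribution, added to a standard policy-gradient loss. The function name constrained_pg_loss and the coefficient beta below are illustrative, and PyTorch is assumed.

    import torch
    import torch.nn.functional as F

    def constrained_pg_loss(logits, actions, returns, beta=0.1):
        """Policy-gradient loss with an information-theoretic penalty.

        The penalty is the KL divergence between the state-conditional
        policy pi(a|s) and the batch-marginal action distribution, which
        bounds the mutual information I(state; action) of the policy.
        """
        log_probs = F.log_softmax(logits, dim=-1)   # (batch, actions)
        probs = log_probs.exp()
        # Marginal action distribution over the batch, treated as a
        # fixed prior (detached so gradients flow only through pi(a|s)).
        marginal = probs.mean(dim=0, keepdim=True).detach()
        kl = (probs * (log_probs - marginal.log())).sum(dim=-1)
        # Standard REINFORCE-style policy-gradient term.
        chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
        pg = -(chosen * returns).mean()
        return pg + beta * kl.mean()

    # Example usage with random data (32 states, 20 actions):
    logits = torch.randn(32, 20, requires_grad=True)
    actions = torch.randint(0, 20, (32,))
    returns = torch.randn(32)
    loss = constrained_pg_loss(logits, actions, returns)
    loss.backward()

Larger values of beta trade reward for a more compressed, state-independent policy, which is one plausible mechanism for the generalization benefit the abstract reports.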


Published

2021-05-18

How to Cite

Malloy, T., Klinger, T., Liu, M., Tesauro, G., Riemer, M., & Sims, C. R. (2021). RL Generalization in a Theory of Mind Game Through a Sleep Metaphor (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 35(18), 15841-15842. https://doi.org/10.1609/aaai.v35i18.17917

Section

AAAI Student Abstract and Poster Program