RL Generalization in a Theory of Mind Game Through a Sleep Metaphor (Student Abstract)
Keywords: Reinforcement Learning, Game Playing, Generalization, Theory of Mind
Abstract
Training agents to learn efficiently in multi-agent environments can benefit from the explicit modelling of other agents' beliefs, especially in complex limited-information games such as the Hanabi card game. However, generalization is also highly relevant to performance in these games, though model comparisons at large training timescales can be difficult. In this work, we address this by introducing a novel model trained using a sleep metaphor on a reduced-complexity version of the Hanabi game. This sleep metaphor consists of an altered training regimen, as well as an information-theoretic constraint on the agent's policy. Results from experimentation demonstrate improved performance through this sleep-metaphor method, and provide a promising motivation for using similar techniques in more complex methods that incorporate explicit models of other agents' beliefs.
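The abstract does not specify the form of the information-theoretic constraint. One common way such a constraint appears in reinforcement learning is as a penalty on the mutual information between states and actions, implemented as a KL divergence between the state-conditional policy and a state-independent marginal policy. The sketch below is purely illustrative of that general idea under assumed names (`info_constrained_loss`, `beta`); it is not the method from the paper.

```python
import numpy as np

def info_constrained_loss(action_probs, marginal_probs, advantages, beta=0.1):
    """Illustrative policy loss with an information-theoretic penalty.

    Combines a standard expected-advantage objective with a
    KL(pi(a|s) || p(a)) penalty, which upper-bounds the mutual
    information between states and actions. `beta` trades off reward
    against policy complexity. All names and the exact form are
    assumptions for illustration, not taken from the abstract.
    """
    # Expected-advantage term per state (negated: we minimize the loss).
    pg_term = -np.sum(action_probs * advantages, axis=-1)
    # Per-state KL divergence from the state-independent marginal policy.
    kl_term = np.sum(action_probs * np.log(action_probs / marginal_probs), axis=-1)
    # Average the penalized objective over the batch of states.
    return float(np.mean(pg_term + beta * kl_term))
```

When the policy matches the marginal (e.g. both uniform), the KL penalty vanishes and the loss reduces to the plain policy-gradient term; a sharply state-dependent policy pays a complexity cost scaled by `beta`.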
How to Cite
Malloy, T., Klinger, T., Liu, M., Tesauro, G., Riemer, M., & Sims, C. R. (2021). RL Generalization in a Theory of Mind Game Through a Sleep Metaphor (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 35(18), 15841-15842. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17917
AAAI Student Abstract and Poster Program