Using Natural Language to Improve Hierarchical Reinforcement Learning in Games

Authors

  • Dave Mobley, University of Kentucky
  • Adrienne Corwin, University of Kentucky
  • Brent Harrison, University of Kentucky

DOI:

https://doi.org/10.1609/aiide.v20i1.31881

Abstract

This work investigates how natural language task descriptions can accelerate reinforcement learning in games. Recognizing that human descriptions often imply a hierarchical task structure, we propose a method that extracts this hierarchy and converts it into "options," policies for solving subtasks. Options are generated by grounding natural language descriptions into environment states, which then serve as task boundaries; the option policies themselves are learned either from prior successful traces or from human-created walkthroughs. We evaluate our approach in a simple grid-world environment and in the more complex text-based game Zork, comparing option-based agents against standard Q-learning and random agents. Our results demonstrate that incorporating natural language task knowledge yields faster, more efficient reinforcement learning across environments and across Q-learning algorithms, including tabular Q-learning and Deep Q-Networks.
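To make the option-learning idea concrete, the following is a minimal sketch, not the authors' implementation: it assumes a toy 5x5 grid world and a single subgoal cell that stands in for a state grounded from a language description (e.g. "go to the door"). The subgoal serves as the option's termination state and source of pseudo-reward, and an option policy is learned for it with tabular Q-learning; the grid size, reward scheme, and hyperparameters are all illustrative choices.

```python
import random

GRID = 5                                       # toy 5x5 grid world
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # right, left, down, up

def step(state, action):
    """Deterministic grid transition; moves are clipped at the walls."""
    x, y = state
    dx, dy = action
    return (min(max(x + dx, 0), GRID - 1), min(max(y + dy, 0), GRID - 1))

def learn_option(subgoal, episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    """Learn an option policy whose termination state is `subgoal`.

    The subgoal plays the role of a task boundary grounded from a natural
    language description; reaching it yields a pseudo-reward of 1.
    """
    q = {}  # (state, action_index) -> estimated value
    for _ in range(episodes):
        state = (0, 0)
        while state != subgoal:
            # epsilon-greedy action selection over the tabular Q-values
            if random.random() < eps:
                a = random.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)),
                        key=lambda i: q.get((state, i), 0.0))
            nxt = step(state, ACTIONS[a])
            reward = 1.0 if nxt == subgoal else 0.0  # pseudo-reward at boundary
            best_next = max(q.get((nxt, i), 0.0) for i in range(len(ACTIONS)))
            q[(state, a)] = q.get((state, a), 0.0) + alpha * (
                reward + gamma * best_next - q.get((state, a), 0.0))
            state = nxt
    return q

def run_option(q, subgoal, max_steps=50):
    """Execute the greedy option policy until its termination state."""
    state, steps = (0, 0), 0
    while state != subgoal and steps < max_steps:
        a = max(range(len(ACTIONS)), key=lambda i: q.get((state, i), 0.0))
        state = step(state, ACTIONS[a])
        steps += 1
    return state, steps
```

A higher-level agent would then treat each learned option as a single temporally extended action, chaining options in the order implied by the language-derived task hierarchy.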

Published

2024-11-15

How to Cite

Mobley, D., Corwin, A., & Harrison, B. (2024). Using Natural Language to Improve Hierarchical Reinforcement Learning in Games. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 20(1), 208-216. https://doi.org/10.1609/aiide.v20i1.31881