Algorithmic Improvements for Deep Reinforcement Learning Applied to Interactive Fiction

Authors

  • Vishal Jain, Mila
  • William Fedus, Mila
  • Hugo Larochelle, Google Brain
  • Doina Precup, Mila
  • Marc G. Bellemare, Google Brain

DOI:

https://doi.org/10.1609/aaai.v34i04.5857

Abstract

Text-based games are a natural challenge domain for deep reinforcement learning algorithms. Their state and action spaces are combinatorially large, their reward function is sparse, and they are partially observable: the agent is informed of the consequences of its actions through textual feedback. In this paper we emphasize this latter point and consider the design of a deep reinforcement learning agent that can play from feedback alone. Our design recognizes and takes advantage of the structural characteristics of text-based games. We first propose a contextualisation mechanism, based on accumulated reward, which simplifies the learning problem and mitigates partial observability. We then study different methods that rely on the notion that most actions are ineffectual in any given situation, following Zahavy et al.'s idea of an admissible action. We evaluate these techniques in a series of text-based games of increasing difficulty based on the TextWorld framework, as well as the iconic game Zork. Empirically, we find that these techniques improve the performance of a baseline deep reinforcement learning agent applied to text-based games.
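The abstract names two algorithmic ideas: conditioning the agent on the reward it has accumulated so far, and exploiting the fact that only a few commands are admissible in any given state. As a rough illustration only, and not the authors' architecture, the Python sketch below conditions a set of linear Q-heads on a coarse score bucket and performs epsilon-greedy selection restricted to an admissible-action mask; every name (encode_text, ScoreContextualisedQ, select_action), the bag-of-words encoder, and the bucket width of 10 points are hypothetical choices made for the sketch.

    # Illustrative sketch only: score-contextualised Q-values plus
    # admissible-action masking, on top of a toy text encoder.
    import numpy as np

    def encode_text(feedback: str, dim: int = 64) -> np.ndarray:
        """Toy bag-of-words encoding of the game's textual feedback."""
        vec = np.zeros(dim)
        for token in feedback.lower().split():
            vec[hash(token) % dim] += 1.0
        return vec / max(1.0, np.linalg.norm(vec))

    class ScoreContextualisedQ:
        """Keeps one linear Q-head per accumulated-reward bucket, so value
        estimates are conditioned on how much score has already been
        collected (a coarse proxy for progress through the game)."""

        def __init__(self, n_actions: int, obs_dim: int = 64, n_buckets: int = 5):
            self.n_buckets = n_buckets
            self.heads = [np.zeros((n_actions, obs_dim)) for _ in range(n_buckets)]

        def q_values(self, obs_vec: np.ndarray, accumulated_reward: float) -> np.ndarray:
            # Map the running score to a bucket index (10 points per bucket here).
            bucket = min(max(0, int(accumulated_reward // 10)), self.n_buckets - 1)
            return self.heads[bucket] @ obs_vec

    def select_action(q: np.ndarray, admissible: np.ndarray, epsilon: float = 0.1) -> int:
        """Epsilon-greedy over the admissible commands only; inadmissible
        commands are masked to -inf before the argmax."""
        masked = np.where(admissible, q, -np.inf)
        if np.random.rand() < epsilon:
            return int(np.random.choice(np.flatnonzero(admissible)))
        return int(np.argmax(masked))

    # Usage: score the current feedback, condition on the running reward,
    # and pick among the commands assumed admissible in this state.
    agent = ScoreContextualisedQ(n_actions=8)
    obs = encode_text("You are standing in an open field west of a white house.")
    q = agent.q_values(obs, accumulated_reward=25.0)
    action = select_action(q, admissible=np.array([1, 1, 0, 0, 1, 0, 0, 1], dtype=bool))

In the paper's setting the admissible set would have to be estimated from the textual feedback rather than given, and the text encoder would be learned; the sketch only fixes the interface these two ideas share with a standard Q-learning agent.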

Published

2020-04-03

How to Cite

Jain, V., Fedus, W., Larochelle, H., Precup, D., & Bellemare, M. G. (2020). Algorithmic Improvements for Deep Reinforcement Learning Applied to Interactive Fiction. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 4328-4336. https://doi.org/10.1609/aaai.v34i04.5857

Section

AAAI Technical Track: Machine Learning