Improving Deep Reinforcement Learning in Minecraft with Action Advice

Authors

  • Spencer Frazier, Georgia Institute of Technology
  • Mark Riedl, Georgia Institute of Technology

DOI:

https://doi.org/10.1609/aiide.v15i1.5237

Abstract

Training deep reinforcement learning agents to perform complex behaviors in 3D virtual environments requires significant computational resources. This is especially true in environments with high degrees of aliasing, where many states share nearly identical visual features. Minecraft is an exemplar of such an environment. We hypothesize that interactive machine learning (IML), wherein human teachers play a direct role in training through demonstrations, critique, or action advice, may alleviate agent susceptibility to aliasing. However, interactive machine learning is only practical when the number of human interactions is limited, requiring a balance between human teacher effort and agent performance. We conduct experiments with two reinforcement learning algorithms that enable human teachers to give action advice, Feedback Arbitration and Newtonian Action Advice, under visual aliasing conditions. To assess the potential cognitive load of each advice type, we vary the accuracy and frequency of the human action advice. We examine training efficiency, robustness to infrequent and inaccurate advisor input, and sensitivity to aliasing.
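The abstract names Newtonian Action Advice, in which human advice persists over several time steps rather than applying only at the moment it is given. The sketch below illustrates that general idea with a tabular Q-learning stand-in rather than the deep RL agents used in the paper; all class, method, and hyperparameter names (e.g., advice_duration) are illustrative assumptions, not the authors' implementation.

```python
import random


class NewtonianActionAdviceAgent:
    """Hedged sketch: a Q-learning agent that follows human action advice
    for a decaying number of steps ("momentum"), then reverts to its own
    epsilon-greedy policy. Names and hyperparameters are assumptions for
    illustration, not the paper's implementation."""

    def __init__(self, actions, advice_duration=10, epsilon=0.1,
                 alpha=0.1, gamma=0.99):
        self.actions = actions
        self.advice_duration = advice_duration  # steps advice persists
        self.epsilon = epsilon                  # exploration rate
        self.alpha = alpha                      # learning rate
        self.gamma = gamma                      # discount factor
        self.q = {}                             # Q-table: (state, action) -> value
        self.advised_action = None              # most recent human advice
        self.advice_steps_left = 0              # remaining "momentum" of that advice

    def give_advice(self, action):
        """Called by the human teacher; the advice persists like momentum."""
        self.advised_action = action
        self.advice_steps_left = self.advice_duration

    def act(self, state):
        # Follow persisting advice while its momentum lasts...
        if self.advice_steps_left > 0:
            self.advice_steps_left -= 1
            return self.advised_action
        # ...otherwise fall back to the agent's own epsilon-greedy policy.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        # Standard Q-learning backup; advice only changes action selection.
        best_next = max(self.q.get((next_state, a), 0.0) for a in self.actions)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old)
```

Because advice influences action selection for a window of subsequent steps, the teacher can intervene infrequently yet still steer the agent through visually aliased regions, which is the trade-off between teacher effort and agent performance the paper studies.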

Published

2019-10-08

How to Cite

Frazier, S., & Riedl, M. (2019). Improving Deep Reinforcement Learning in Minecraft with Action Advice. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 15(1), 146-152. https://doi.org/10.1609/aiide.v15i1.5237