Deep Q-learning From Demonstrations

Authors

  • Todd Hester Google DeepMind
  • Matej Vecerik Google DeepMind
  • Olivier Pietquin Google DeepMind
  • Marc Lanctot Google DeepMind
  • Tom Schaul Google DeepMind
  • Bilal Piot Google DeepMind
  • Dan Horgan Google DeepMind
  • John Quan Google DeepMind
  • Andrew Sendonaris Google DeepMind
  • Ian Osband Google DeepMind
  • Gabriel Dulac-Arnold Google DeepMind
  • John Agapiou Google DeepMind
  • Joel Leibo Google DeepMind
  • Audrunas Gruslys Google DeepMind

Keywords

Reinforcement Learning, Learning from Demonstrations

Abstract

Deep reinforcement learning (RL) has achieved several high-profile successes in difficult decision-making problems. However, these algorithms typically require a huge amount of data before they reach reasonable performance; in fact, their performance during learning can be extremely poor. This may be acceptable in a simulator, but it severely limits the applicability of deep RL to many real-world tasks, where the agent must learn in the real environment. In this paper we study a setting where the agent may access data from previous control of the system. We present an algorithm, Deep Q-learning from Demonstrations (DQfD), that leverages even relatively small amounts of demonstration data to massively accelerate the learning process, and that automatically assesses the necessary ratio of demonstration data while learning thanks to a prioritized replay mechanism. DQfD works by combining temporal difference updates with supervised classification of the demonstrator’s actions. We show that DQfD has better initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN): it starts with better scores on the first million steps on 41 of 42 games, and on average it takes PDD DQN 83 million steps to catch up to DQfD’s performance. DQfD learns to outperform the best demonstration given in 14 of 42 games. In addition, DQfD leverages human demonstrations to achieve state-of-the-art results for 11 games. Finally, we show that DQfD performs better than three related algorithms for incorporating demonstration data into DQN.
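The abstract describes DQfD's core mechanism: combining a temporal-difference update with a supervised loss on the demonstrator's actions. The sketch below is a minimal, hypothetical illustration of these two per-transition loss terms (the function names, the fixed margin value, and the NumPy formulation are assumptions for illustration, not the authors' implementation; the paper also uses n-step returns and regularization, omitted here).

```python
import numpy as np

def margin_fn(a_expert, a, margin=0.8):
    # large-margin function: zero for the demonstrated action,
    # a positive margin for every other action (margin value is an assumption)
    return 0.0 if a == a_expert else margin

def dqfd_losses(q_values, q_next_max, action, reward, gamma, a_expert=None):
    """Per-transition DQfD loss terms (illustrative sketch).

    q_values   : Q(s, .) for the current state, one entry per action
    q_next_max : max over a' of Q_target(s', a') from a target network
    action     : action actually taken in this transition
    a_expert   : demonstrated action, or None for self-generated data
    """
    # 1-step temporal-difference (Q-learning) loss
    td_target = reward + gamma * q_next_max
    td_loss = (td_target - q_values[action]) ** 2

    # large-margin supervised loss, applied only to demonstration data:
    # pushes Q(s, a_expert) above all other actions by at least the margin
    if a_expert is not None:
        margins = np.array([margin_fn(a_expert, a) for a in range(len(q_values))])
        supervised_loss = np.max(q_values + margins) - q_values[a_expert]
    else:
        supervised_loss = 0.0

    return td_loss, supervised_loss

# example: 3 actions, demonstrated action is action 1
td, sup = dqfd_losses(np.array([0.2, 0.5, 0.1]), q_next_max=0.6,
                      action=1, reward=1.0, gamma=0.99, a_expert=1)
```

On demonstration transitions both terms are active, so the network is pulled toward both Bellman consistency and imitation of the demonstrator; on the agent's own transitions only the TD term applies.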

Published

2018-04-29

How to Cite

Hester, T., Vecerik, M., Pietquin, O., Lanctot, M., Schaul, T., Piot, B., Horgan, D., Quan, J., Sendonaris, A., Osband, I., Dulac-Arnold, G., Agapiou, J., Leibo, J., & Gruslys, A. (2018). Deep Q-learning From Demonstrations. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/11757