Automatic Curriculum Graph Generation for Reinforcement Learning Agents

Authors

  • Maxwell Svetlik University of Texas at Austin
  • Matteo Leonetti University of Leeds
  • Jivko Sinapov University of Texas at Austin
  • Rishi Shah University of Texas at Austin
  • Nick Walker University of Texas at Austin
  • Peter Stone University of Texas at Austin

DOI:

https://doi.org/10.1609/aaai.v31i1.10933

Keywords:

curriculum learning, reinforcement learning, transfer learning, machine learning

Abstract

In recent years, research has shown that transfer learning methods can be leveraged to construct curricula that sequence a series of simpler tasks such that performance on a final target task is improved. A major limitation of existing approaches is that such curricula are handcrafted by humans who are typically domain experts. To address this limitation, we introduce a method to generate a curriculum based on task descriptors and a novel metric of transfer potential. Our method automatically generates a curriculum as a directed acyclic graph (as opposed to a linear sequence, as in existing work). Experiments in both discrete and continuous domains show that our method produces curricula that improve the agent's learning performance compared to the baseline condition of learning the target task from scratch.
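To make the abstract's idea concrete, the high-level procedure can be sketched as: estimate pairwise transfer potential between tasks, add a directed edge wherever that potential is high enough, and traverse the resulting DAG to obtain a training order ending at the target task. The sketch below is purely illustrative and is not the paper's algorithm; the `transfer_potential` callable, the threshold, and the assumption that tasks are pre-sorted by complexity (which keeps the graph acyclic) are all stand-ins for the paper's descriptor-based metric.

```python
from collections import defaultdict, deque

def build_curriculum_dag(tasks, transfer_potential, threshold=0.0):
    """Build a curriculum DAG where edge (u, v) means 'learn u before v'.

    `transfer_potential(u, v)` is a hypothetical estimate of how much
    learning u first should help on v (a stand-in for the paper's
    descriptor-based metric). Edges only go from earlier (simpler) tasks
    to later ones, so the graph is acyclic by construction.
    """
    edges = defaultdict(list)
    indegree = {t: 0 for t in tasks}
    for i, u in enumerate(tasks):
        for v in tasks[i + 1:]:  # u precedes v in the complexity order
            if transfer_potential(u, v) > threshold:
                edges[u].append(v)
                indegree[v] += 1
    return edges, indegree

def curriculum_order(tasks, edges, indegree):
    """Topologically sort the DAG (Kahn's algorithm) into a training order."""
    queue = deque(t for t in tasks if indegree[t] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in edges[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    return order

# Toy example with made-up task names and potentials.
potentials = {("grid_small", "grid_keys"): 0.6,
              ("grid_small", "grid_full"): 0.2,
              ("grid_keys", "grid_full"): 0.5}
tasks = ["grid_small", "grid_keys", "grid_full"]  # target task last
edges, indeg = build_curriculum_dag(
    tasks, lambda u, v: potentials.get((u, v), 0.0), threshold=0.3)
order = curriculum_order(tasks, edges, indeg)
print(order)  # a valid order ending at the target task "grid_full"
```

An agent would then be trained on the tasks in this order, transferring value functions or policies along the DAG's edges before finally training on the target task.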

Published

2017-02-13

How to Cite

Svetlik, M., Leonetti, M., Sinapov, J., Shah, R., Walker, N., & Stone, P. (2017). Automatic Curriculum Graph Generation for Reinforcement Learning Agents. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.10933