Augmenting Policy Learning with Routines Discovered from a Single Demonstration

Authors

  • Zelin Zhao, Shanghai Jiao Tong University
  • Chuang Gan, MIT-IBM Watson AI Lab
  • Jiajun Wu, Stanford University
  • Xiaoxiao Guo, MIT-IBM Watson AI Lab
  • Joshua B. Tenenbaum, Massachusetts Institute of Technology

DOI:

https://doi.org/10.1609/aaai.v35i12.17316

Keywords:

Imitation Learning & Inverse Reinforcement Learning, Reinforcement Learning, Neuro-Symbolic AI (NSAI)

Abstract

Humans can abstract prior knowledge from very little data and use it to boost skill learning. In this paper, we propose routine-augmented policy learning (RAPL), which discovers routines composed of primitive actions from a single demonstration and uses the discovered routines to augment policy learning. To discover routines from the demonstration, we first extract routine candidates by identifying a grammar over the demonstrated action trajectory; the best routines, measured by length and frequency, are then selected to form a routine library. With the discovered routines, we learn the policy simultaneously at the primitive level and the routine level, leveraging the temporal structure of routines. Our approach enables imitating expert behavior at multiple temporal scales for imitation learning and promotes exploration in reinforcement learning. Extensive experiments on Atari games demonstrate that RAPL improves the state-of-the-art imitation learning method SQIL and the reinforcement learning method A2C. Further, we show that the discovered routines generalize to unseen levels and difficulties on the CoinRun benchmark.
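As a concrete illustration of the routine-discovery step described in the abstract, the sketch below compresses a demonstrated action trajectory by repeatedly merging its most frequent adjacent action pair into a new routine symbol, then expands the candidates and ranks them by length times frequency. This is a minimal sketch assuming a BPE-style grammar compression, not the paper's exact procedure; the function names, the `max_routines` and `min_count` parameters, and the toy action encoding are illustrative assumptions.

```python
# Minimal sketch of routine discovery from a single demonstrated action
# trajectory (assumption: BPE-style grammar compression; the paper's exact
# grammar-induction method may differ). Actions are assumed to be integers.
from collections import Counter


def count_pairs(seq):
    """Count adjacent symbol pairs in the (partially compressed) trajectory."""
    return Counter(zip(seq, seq[1:]))


def discover_routines(actions, max_routines=4, min_count=2):
    """Greedily merge the most frequent adjacent pair into a new routine
    symbol, then expand each routine back into primitive actions."""
    seq = list(actions)
    rules = {}                       # routine symbol -> (left, right)
    next_symbol = max(actions) + 1   # fresh symbols above the primitive range
    for _ in range(max_routines):
        pairs = count_pairs(seq)
        if not pairs:
            break
        (a, b), count = pairs.most_common(1)[0]
        if count < min_count:
            break
        rules[next_symbol] = (a, b)
        # Replace every occurrence of the pair with the new routine symbol.
        merged, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == (a, b):
                merged.append(next_symbol)
                i += 2
            else:
                merged.append(seq[i])
                i += 1
        seq = merged
        next_symbol += 1

    def expand(symbol):
        # Recursively unfold a routine symbol into its primitive actions.
        if symbol not in rules:
            return [symbol]
        left, right = rules[symbol]
        return expand(left) + expand(right)

    # Score candidates by length * frequency and keep the best as the library.
    freq = Counter(s for s in seq if s in rules)
    library = {s: expand(s) for s in rules}
    ranked = sorted(library.items(),
                    key=lambda kv: len(kv[1]) * freq.get(kv[0], 0),
                    reverse=True)
    return [routine for _, routine in ranked]


if __name__ == "__main__":
    # Toy Atari-style trajectory (hypothetical encoding: 0=NOOP, 2=RIGHT,
    # 3=LEFT, 11=RIGHTFIRE). Repeated sub-sequences become routines.
    demo = [2, 11, 2, 11, 0, 3, 2, 11, 2, 11, 0, 3]
    for routine in discover_routines(demo):
        print(routine)
```

At policy-learning time, each discovered routine can be treated as a temporally extended macro-action alongside the primitive actions, which is what lets the agent imitate and explore at multiple temporal scales.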

Published

2021-05-18

How to Cite

Zhao, Z., Gan, C., Wu, J., Guo, X., & Tenenbaum, J. B. (2021). Augmenting Policy Learning with Routines Discovered from a Single Demonstration. Proceedings of the AAAI Conference on Artificial Intelligence, 35(12), 11024-11032. https://doi.org/10.1609/aaai.v35i12.17316

Section

AAAI Technical Track on Machine Learning V