Improving Long-Horizon Imitation through Instruction Prediction

Authors

  • Joey Hejna, Stanford University
  • Pieter Abbeel, UC Berkeley
  • Lerrel Pinto, New York University

DOI:

https://doi.org/10.1609/aaai.v37i7.25951

Keywords:

ML: Imitation Learning & Inverse Reinforcement Learning, ROB: Cognitive Robotics

Abstract

Complex, long-horizon planning and its combinatorial nature pose steep challenges for learning-based agents. Difficulties in such settings are exacerbated in low data regimes where over-fitting stifles generalization and compounding errors hurt accuracy. In this work, we explore the use of an often unused source of auxiliary supervision: language. Inspired by recent advances in transformer-based models, we train agents with an instruction prediction loss that encourages learning temporally extended representations that operate at a high level of abstraction. Concretely, we demonstrate that instruction modeling significantly improves performance in planning environments when training with a limited number of demonstrations on the BabyAI and Crafter benchmarks. In further analysis we find that instruction modeling is most important for tasks that require complex reasoning, while understandably offering smaller gains in environments that require simple plans. More details and code can be found at https://github.com/jhejna/instruction-prediction.
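The core idea in the abstract, combining a behavior-cloning objective with an auxiliary instruction-prediction loss, can be sketched in a few lines. This is a toy illustration only, not the paper's implementation: the function names, the token-averaged cross-entropy, and the weighting coefficient `lam` are all hypothetical stand-ins for the transformer-based model described above.

```python
import math

def cross_entropy(logits, target):
    """Softmax cross-entropy for a single categorical prediction."""
    m = max(logits)  # subtract the max for numerical stability
    log_sum_exp = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_sum_exp - logits[target]

def combined_loss(action_logits, expert_action, instr_logits, instr_tokens, lam=0.5):
    """Behavior cloning plus a weighted instruction-prediction auxiliary loss.

    action_logits: per-action scores at the current state
    expert_action: index of the demonstrated action
    instr_logits:  per-token score lists for the language instruction
    instr_tokens:  ground-truth instruction token indices
    lam:           hypothetical weight on the auxiliary loss
    """
    bc_loss = cross_entropy(action_logits, expert_action)
    # Average the prediction loss over the instruction's tokens.
    instr_loss = sum(
        cross_entropy(l, t) for l, t in zip(instr_logits, instr_tokens)
    ) / len(instr_tokens)
    return bc_loss + lam * instr_loss
```

Gradients of this combined objective would then update a shared encoder, so the representation must carry enough temporally extended information to predict the instruction, which is the source of the generalization benefit the abstract reports.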

Published

2023-06-26

How to Cite

Hejna, J., Abbeel, P., & Pinto, L. (2023). Improving Long-Horizon Imitation through Instruction Prediction. Proceedings of the AAAI Conference on Artificial Intelligence, 37(7), 7857-7865. https://doi.org/10.1609/aaai.v37i7.25951

Section

AAAI Technical Track on Machine Learning II