Inverse Reinforcement Learning with Natural Language Goals

Authors

  • Li Zhou, Amazon
  • Kevin Small, Amazon

DOI:

https://doi.org/10.1609/aaai.v35i12.17326

Keywords:

Imitation Learning & Inverse Reinforcement Learning, Language and Vision, Reinforcement Learning, Language Grounding & Multi-modal NLP

Abstract

Humans generally use natural language to communicate task requirements to each other. Ideally, natural language should also be usable for communicating goals to autonomous machines (e.g., robots) to minimize friction in task specification. However, understanding and mapping natural language goals to sequences of states and actions is challenging. Specifically, existing work along these lines has encountered difficulty in generalizing learned policies to new natural language goals and environments. In this paper, we propose a novel adversarial inverse reinforcement learning algorithm to learn a language-conditioned policy and reward function. To improve generalization of the learned policy and reward function, we use a variational goal generator to relabel trajectories and sample diverse goals during training. Our algorithm outperforms multiple baselines by a large margin on a vision-based natural language instruction following dataset (Room-to-Room), demonstrating a promising advance in enabling the use of natural language instructions for specifying agent goals.
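For readers wanting a concrete picture of the recipe the abstract describes, below is a minimal PyTorch sketch of a language-conditioned adversarial IRL setup: a goal encoder, a goal-conditioned policy pi(a|s,g), and a learned reward f(s,a,g) combined into the AIRL-style discriminator D = exp(f) / (exp(f) + pi(a|s,g)). All class names, dimensions, and the toy rollout are illustrative assumptions, not the authors' released code; the paper's variational goal generator for trajectory relabeling is indicated only in a closing comment.

```python
# Minimal sketch of language-conditioned AIRL (illustrative; not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, EMB, HID, STATE_DIM, N_ACTIONS = 1000, 32, 64, 16, 6  # toy sizes

class GoalEncoder(nn.Module):
    """Encodes a tokenized natural language goal into a fixed vector."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.gru = nn.GRU(EMB, HID, batch_first=True)
    def forward(self, tokens):                  # tokens: (B, T)
        _, h = self.gru(self.emb(tokens))
        return h.squeeze(0)                     # (B, HID)

class LanguagePolicy(nn.Module):
    """pi(a | s, g): action log-probabilities conditioned on state and goal."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + HID, HID), nn.ReLU(),
            nn.Linear(HID, N_ACTIONS))
    def forward(self, state, goal_vec):
        return F.log_softmax(self.net(torch.cat([state, goal_vec], -1)), -1)

class LanguageReward(nn.Module):
    """f(s, a, g): the learned reward used inside the AIRL discriminator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + N_ACTIONS + HID, HID), nn.ReLU(),
            nn.Linear(HID, 1))
    def forward(self, state, action_onehot, goal_vec):
        x = torch.cat([state, action_onehot, goal_vec], -1)
        return self.net(x).squeeze(-1)

def airl_discriminator_logit(f_val, log_pi_a):
    # AIRL: D = exp(f) / (exp(f) + pi(a|s,g)), so logit(D) = f - log pi(a|s,g).
    return f_val - log_pi_a

# --- toy usage ---
enc, pi, rew = GoalEncoder(), LanguagePolicy(), LanguageReward()
goal = enc(torch.randint(0, VOCAB, (2, 8)))           # two tokenized goals
state = torch.randn(2, STATE_DIM)
log_pi = pi(state, goal)                               # (2, N_ACTIONS)
action = log_pi.exp().multinomial(1).squeeze(-1)       # sample actions
a_onehot = F.one_hot(action, N_ACTIONS).float()
f_val = rew(state, a_onehot, goal)
logit = airl_discriminator_logit(
    f_val, log_pi.gather(1, action[:, None]).squeeze(-1))
# The discriminator is trained with BCE to separate expert transitions
# (label 1) from policy transitions (label 0); the policy then maximizes f.
loss = F.binary_cross_entropy_with_logits(logit, torch.ones(2))
# The paper additionally trains a variational goal generator q(g | trajectory)
# and uses it to relabel trajectories with sampled goals during training,
# which is the component credited with improved generalization (omitted here).
```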

Published

2021-05-18

How to Cite

Zhou, L., & Small, K. (2021). Inverse Reinforcement Learning with Natural Language Goals. Proceedings of the AAAI Conference on Artificial Intelligence, 35(12), 11116-11124. https://doi.org/10.1609/aaai.v35i12.17326

Section

AAAI Technical Track on Machine Learning V