Listen, Attend, and Walk: Neural Mapping of Navigational Instructions to Action Sequences

Authors

  • Hongyuan Mei, Toyota Technological Institute at Chicago
  • Mohit Bansal, Toyota Technological Institute at Chicago
  • Matthew Walter, Toyota Technological Institute at Chicago

DOI:

https://doi.org/10.1609/aaai.v30i1.10364

Keywords:

direction following, natural language processing, natural language semantics

Abstract

We propose a neural sequence-to-sequence model for direction following, a task that is essential to realizing effective autonomous agents. Our alignment-based encoder-decoder model with long short-term memory recurrent neural networks (LSTM-RNN) translates natural language instructions to action sequences based upon a representation of the observable world state. We introduce a multi-level aligner that empowers our model to focus on sentence "regions" salient to the current world state by using multiple abstractions of the input sentence. In contrast to existing methods, our model uses no specialized linguistic resources (e.g., parsers) or task-specific annotations (e.g., seed lexicons). It is therefore generalizable, yet still achieves the best results reported to date on a benchmark single-sentence dataset and competitive results for the limited-training multi-sentence setting. We analyze our model through a series of ablations that elucidate the contributions of its primary components.
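
To make the architecture described in the abstract concrete, below is a minimal sketch of an alignment-based LSTM encoder-decoder with a multi-level aligner, written in PyTorch. It is illustrative only: the class names (MultiLevelAligner, InstructionFollower), the layer dimensions, and the exact attention parameterization are assumptions for this sketch, not the paper's reported implementation.

```python
# Sketch of an alignment-based encoder-decoder for instruction following.
# Assumptions (not from the paper text): PyTorch, toy dimensions, and the
# class names used here are hypothetical illustrations.

import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiLevelAligner(nn.Module):
    """Attention over the instruction that scores each word using both its
    raw embedding and its encoder hidden state (two levels of abstraction)."""

    def __init__(self, embed_dim, enc_dim, dec_dim, attn_dim):
        super().__init__()
        self.w_embed = nn.Linear(embed_dim, attn_dim, bias=False)
        self.w_enc = nn.Linear(enc_dim, attn_dim, bias=False)
        self.w_dec = nn.Linear(dec_dim, attn_dim, bias=False)
        self.v = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, embeds, enc_states, dec_state):
        # embeds:     (T, embed_dim)  word embeddings of the instruction
        # enc_states: (T, enc_dim)    encoder LSTM hidden states
        # dec_state:  (dec_dim,)      current decoder hidden state
        scores = self.v(torch.tanh(
            self.w_embed(embeds) + self.w_enc(enc_states) + self.w_dec(dec_state)
        )).squeeze(-1)                      # (T,)
        alpha = F.softmax(scores, dim=0)    # alignment weights over words
        # The context mixes both abstraction levels, weighted by the alignment.
        context = alpha @ torch.cat([embeds, enc_states], dim=-1)
        return context, alpha


class InstructionFollower(nn.Module):
    """Encoder-decoder mapping an instruction plus the observable world
    state at each step to a distribution over actions."""

    def __init__(self, vocab_size, n_actions, world_dim,
                 embed_dim=32, enc_dim=64, dec_dim=64, attn_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, enc_dim)
        self.decoder_cell = nn.LSTMCell(world_dim + embed_dim + enc_dim, dec_dim)
        self.aligner = MultiLevelAligner(embed_dim, enc_dim, dec_dim, attn_dim)
        self.action_head = nn.Linear(dec_dim, n_actions)

    def forward(self, instruction, world_states):
        # instruction:  (T,) word ids;  world_states: (steps, world_dim)
        embeds = self.embed(instruction)                    # (T, embed_dim)
        enc_states, _ = self.encoder(embeds.unsqueeze(1))   # (T, 1, enc_dim)
        enc_states = enc_states.squeeze(1)
        h = torch.zeros(1, self.decoder_cell.hidden_size)
        c = torch.zeros(1, self.decoder_cell.hidden_size)
        logits = []
        for y_t in world_states:                            # one action per step
            context, _ = self.aligner(embeds, enc_states, h.squeeze(0))
            dec_in = torch.cat([y_t, context]).unsqueeze(0)
            h, c = self.decoder_cell(dec_in, (h, c))
            logits.append(self.action_head(h).squeeze(0))
        return torch.stack(logits)                          # (steps, n_actions)
```

In training, the per-step logits would be compared against the reference action sequence with a cross-entropy loss; at test time the agent would pick the highest-scoring action at each step and update its world-state representation accordingly.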

Published

2016-03-05

How to Cite

Mei, H., Bansal, M., & Walter, M. (2016). Listen, Attend, and Walk: Neural Mapping of Navigational Instructions to Action Sequences. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). https://doi.org/10.1609/aaai.v30i1.10364

Section

Technical Papers: NLP and Machine Learning