VELMA: Verbalization Embodiment of LLM Agents for Vision and Language Navigation in Street View

Authors

  • Raphael Schumann Computational Linguistics, Heidelberg University, Germany
  • Wanrong Zhu University of California, Santa Barbara
  • Weixi Feng University of California, Santa Barbara
  • Tsu-Jui Fu University of California, Santa Barbara
  • Stefan Riezler Computational Linguistics, Heidelberg University, Germany; IWR, Heidelberg University, Germany
  • William Yang Wang University of California, Santa Barbara

DOI:

https://doi.org/10.1609/aaai.v38i17.29858

Keywords:

NLP: Language Grounding & Multi-modal NLP, CV: Language and Vision

Abstract

Incremental decision making in real-world environments is one of the most challenging tasks in embodied artificial intelligence. One particularly demanding scenario is Vision and Language Navigation (VLN), which requires visual and natural language understanding as well as spatial and temporal reasoning capabilities. The embodied agent needs to ground its understanding of navigation instructions in observations of a real-world environment like Street View. Despite the impressive results of LLMs in other research areas, it remains an open problem how to best connect them with an interactive visual environment. In this work, we propose VELMA, an embodied LLM agent that uses a verbalization of the trajectory and of visual environment observations as a contextual prompt for the next action. Visual information is verbalized by a pipeline that extracts landmarks from the human-written navigation instructions and uses CLIP to determine their visibility in the current panorama view. We show that VELMA is able to successfully follow navigation instructions in Street View with only two in-context examples. We further finetune the LLM agent on a few thousand examples and achieve around 25% relative improvement in task completion over the previous state-of-the-art on two datasets.
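The core idea of the abstract, turning environment observations into text that is appended to the LLM's prompt, can be illustrated with a minimal sketch. In VELMA the visibility of landmarks is determined with CLIP; here visibility flags are assumed as given input, and the function names and sentence templates are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical sketch of the verbalization step: observations of the current
# panorama (intersection status, landmarks judged visible, e.g. by CLIP) are
# turned into sentences that extend the agent's contextual prompt.
# All names and templates below are assumptions for illustration.

def verbalize_observation(at_intersection: bool, visible_landmarks: list[str]) -> str:
    """Turn one step's environment observations into a prompt sentence."""
    parts = []
    if at_intersection:
        parts.append("You are at an intersection.")
    for landmark in visible_landmarks:
        parts.append(f"There is {landmark} visible.")
    if not parts:
        parts.append("No landmarks are visible.")
    return " ".join(parts)

def build_prompt(instruction: str, trajectory_verbalizations: list[str]) -> str:
    """Concatenate the navigation instruction with the verbalized trajectory,
    forming the contextual prompt from which the LLM predicts the next action."""
    return instruction + "\n" + "\n".join(trajectory_verbalizations)
```

A usage example: `build_prompt("Turn left at the bank.", [verbalize_observation(True, ["a bank"])])` yields a prompt that states the instruction followed by the observation that the agent is at an intersection with a bank visible.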

Published

2024-03-24

How to Cite

Schumann, R., Zhu, W., Feng, W., Fu, T.-J., Riezler, S., & Wang, W. Y. (2024). VELMA: Verbalization Embodiment of LLM Agents for Vision and Language Navigation in Street View. Proceedings of the AAAI Conference on Artificial Intelligence, 38(17), 18924–18933. https://doi.org/10.1609/aaai.v38i17.29858

Section

AAAI Technical Track on Natural Language Processing II