NavGPT: Explicit Reasoning in Vision-and-Language Navigation with Large Language Models

Authors

  • Gengze Zhou, University of Adelaide
  • Yicong Hong, Australian National University
  • Qi Wu, University of Adelaide

DOI:

https://doi.org/10.1609/aaai.v38i7.28597

Keywords:

CV: Language and Vision, NLP: (Large) Language Models, CV: Vision for Robotics & Autonomous Driving

Abstract

Trained on an unprecedented scale of data, large language models (LLMs) such as ChatGPT and GPT-4 exhibit significant reasoning abilities that emerge with model scaling. This trend underscores the potential of training LLMs on unlimited language data, advancing the development of a universal embodied agent. In this work, we introduce NavGPT, a purely LLM-based instruction-following navigation agent, to reveal the reasoning capability of GPT models in complex embodied scenes by performing zero-shot sequential action prediction for vision-and-language navigation (VLN). At each step, NavGPT takes textual descriptions of the visual observations, the navigation history, and the future explorable directions as inputs, reasons about the agent's current status, and makes a decision to approach the target. Through comprehensive experiments, we demonstrate that NavGPT can explicitly perform high-level planning for navigation, including decomposing instructions into sub-goals, integrating commonsense knowledge relevant to the navigation task, identifying landmarks in observed scenes, tracking navigation progress, and adapting to exceptions by adjusting the plan. Furthermore, we show that LLMs are capable of generating high-quality navigational instructions from the observations and actions along a path, as well as drawing accurate top-down metric trajectories from the agent's navigation history. Although the performance of NavGPT on zero-shot R2R tasks still falls short of trained models, we suggest adapting multi-modality inputs so that LLMs can serve as visual navigation agents, and applying the explicit reasoning of LLMs to benefit learning-based models. Code is available at: https://github.com/GengzeZhou/NavGPT.
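
The following is a minimal Python sketch, for illustration only, of the zero-shot decision loop the abstract describes: textual descriptions of the observation, the navigation history, and the explorable directions are composed into a prompt, a chat LLM is asked to reason step by step and choose the next viewpoint, and its reply is parsed into an action. All names here (query_llm, Viewpoint, navgpt_step) and the prompt wording are assumptions made for this sketch, not the paper's actual prompt manager or released code.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Viewpoint:
        viewpoint_id: str   # candidate viewpoint reachable from the current node
        description: str    # textual description of what is visible in that direction

    def query_llm(prompt: str) -> str:
        """Placeholder for a chat-LLM call (e.g. GPT-4); returns the model's reply."""
        raise NotImplementedError

    def navgpt_step(instruction: str,
                    history: List[str],
                    observation: str,
                    candidates: List[Viewpoint]) -> str:
        """Ask the LLM to reason over the current state and pick the next viewpoint."""
        options = "\n".join(f"- {c.viewpoint_id}: {c.description}" for c in candidates)
        past = "\n".join(history) if history else "(start of episode)"
        prompt = (
            "You are a navigation agent following this instruction:\n"
            f"{instruction}\n\n"
            f"Navigation history:\n{past}\n\n"
            f"Current observation:\n{observation}\n\n"
            f"Explorable directions:\n{options}\n\n"
            "Think step by step about your progress, then answer with the single "
            "viewpoint id to move to next, or 'STOP' if the goal is reached."
        )
        reply = query_llm(prompt)
        # Naive parsing: take the last token that matches a candidate id (or STOP).
        for token in reversed(reply.split()):
            token = token.strip(".,'\"")
            if token == "STOP" or any(token == c.viewpoint_id for c in candidates):
                return token
        return "STOP"  # fall back to stopping if the reply cannot be parsed

In practice the reasoning trace returned by the LLM (before the chosen viewpoint id) is what makes the planning explicit and inspectable, which is the property the paper analyzes.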

Published

2024-03-24

How to Cite

Zhou, G., Hong, Y., & Wu, Q. (2024). NavGPT: Explicit Reasoning in Vision-and-Language Navigation with Large Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 38(7), 7641-7649. https://doi.org/10.1609/aaai.v38i7.28597

Issue

Vol. 38 No. 7 (2024)

Section

AAAI Technical Track on Computer Vision VI