SayNav: Grounding Large Language Models for Dynamic Planning to Navigation in New Environments

Authors

  • Abhinav Rajvanshi, SRI International, 201 Washington Rd, Princeton, NJ 08540, USA
  • Karan Sikka, SRI International, 201 Washington Rd, Princeton, NJ 08540, USA
  • Xiao Lin, SRI International, 201 Washington Rd, Princeton, NJ 08540, USA
  • Bhoram Lee, SRI International, 201 Washington Rd, Princeton, NJ 08540, USA
  • Han-Pang Chiu, SRI International, 201 Washington Rd, Princeton, NJ 08540, USA
  • Alvaro Velasquez, University of Colorado Boulder, Boulder, CO 80309, USA; Defense Advanced Research Projects Agency (DARPA)

DOI:

https://doi.org/10.1609/icaps.v34i1.31506

Abstract

Semantic reasoning and dynamic planning capabilities are crucial for an autonomous agent to perform complex navigation tasks in unknown environments. Succeeding in these tasks requires a large amount of common-sense knowledge that humans possess. We present SayNav, a new approach that leverages human knowledge from Large Language Models (LLMs) for efficient generalization to complex navigation tasks in unknown large-scale environments. SayNav uses a novel grounding mechanism that incrementally builds a 3D scene graph of the explored environment as input to LLMs, which then generate feasible and contextually appropriate high-level plans for navigation. The LLM-generated plan is then executed by a pre-trained low-level planner that treats each planned step as a short-distance point-goal navigation sub-task. SayNav dynamically generates step-by-step instructions during navigation and continuously refines future steps based on newly perceived information. We evaluate SayNav on the multi-object navigation (MultiON) task, which requires the agent to utilize a massive amount of human knowledge to efficiently search for multiple different objects in an unknown environment. We also introduce a benchmark dataset for the MultiON task, built with the ProcTHOR framework, which provides large photo-realistic indoor environments with a variety of objects. SayNav achieves state-of-the-art results and even outperforms an oracle-based baseline with strong ground-truth assumptions by more than 8% in success rate, highlighting its ability to generate dynamic plans for successfully locating objects in large-scale new environments. The code, benchmark dataset, and demonstration videos are accessible at https://www.sri.com/ics/computer-vision/saynav.
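The plan-act-replan cycle described in the abstract can be sketched in code. This is a minimal, hypothetical illustration based only on the abstract, not the authors' implementation: the scene graph is reduced to a dictionary, and `llm_plan` and `point_goal_navigate` are stand-ins for the LLM prompt call and the pre-trained point-goal policy. All function and variable names are illustrative assumptions.

```python
# Hypothetical sketch of SayNav's dynamic planning loop (not the paper's code):
# perceive -> update scene graph -> LLM proposes a short-distance sub-goal ->
# low-level planner executes it -> replan with newly perceived information.

def update_scene_graph(graph, observations):
    """Incrementally add newly perceived objects and their locations."""
    for obj, location in observations:
        graph.setdefault(obj, location)
    return graph

def llm_plan(scene_graph, remaining_goals):
    """Stand-in for the LLM planner. A real system would serialize the
    scene graph into a prompt; here we simply pick the first target
    object already present in the graph, else ask to keep exploring."""
    for goal in remaining_goals:
        if goal in scene_graph:
            return ("go_to", goal, scene_graph[goal])
    return ("explore", None, None)

def point_goal_navigate(step):
    """Stand-in for the pre-trained low-level point-goal planner:
    pretend navigation to any known point succeeds."""
    action, _target, _point = step
    return action == "go_to"

def saynav_loop(goals, perception_stream):
    """Run the plan-act-replan cycle over a stream of per-step observations."""
    scene_graph, found = {}, []
    for observations in perception_stream:
        scene_graph = update_scene_graph(scene_graph, observations)
        step = llm_plan(scene_graph, [g for g in goals if g not in found])
        if point_goal_navigate(step):
            found.append(step[1])
        if len(found) == len(goals):
            break
    return found
```

For example, with two target objects that only become visible on the second perception step, the loop explores first, then locates both targets once they enter the scene graph.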

Published

2024-05-30

How to Cite

Rajvanshi, A., Sikka, K., Lin, X., Lee, B., Chiu, H.-P., & Velasquez, A. (2024). SayNav: Grounding Large Language Models for Dynamic Planning to Navigation in New Environments. Proceedings of the International Conference on Automated Planning and Scheduling, 34(1), 464-474. https://doi.org/10.1609/icaps.v34i1.31506