Run, Ruminate, and Regulate: A Dual-process Thinking System for Vision-and-Language Navigation

Authors

  • Yu Zhong, State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences (UCAS)
  • Zihao Zhang, State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences; Institute of AI for Industries (IAII), Chinese Academy of Sciences
  • Rui Zhang, State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences
  • Lingdong Huang, State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences (UCAS)
  • Haihan Gao, State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences; University of Science and Technology of China
  • Shuo Wang, State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences (UCAS)
  • Da Li, Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences (UCAS)
  • Ruijian Han, Department of Data Science and Artificial Intelligence, The Hong Kong Polytechnic University
  • Jiaming Guo, State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences
  • Shaohui Peng, Institute of Software, Chinese Academy of Sciences
  • Di Huang, State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences
  • Yunji Chen, State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences (UCAS)

DOI:

https://doi.org/10.1609/aaai.v40i22.38954

Abstract

Vision-and-Language Navigation (VLN) requires an agent to dynamically explore complex 3D environments following human instructions. Recent research underscores the potential of harnessing large language models (LLMs) for VLN, given their commonsense knowledge and general reasoning capabilities. Despite these strengths, a substantial gap in task completion performance persists between LLM-based approaches and domain experts: LLMs inherently struggle to comprehend real-world spatial correlations precisely, and the latency of LLM inference can make decision-making considerably inefficient. To address these issues, we propose a novel dual-process thinking framework dubbed R3, integrating LLMs' generalization capabilities with VLN-specific expertise in a zero-shot manner. The framework comprises three core modules: Runner, Ruminator, and Regulator. The Runner is a lightweight transformer-based expert model that ensures efficient and accurate navigation under regular circumstances. The Ruminator employs a multimodal LLM as its backbone and adopts chain-of-thought (CoT) prompting to elicit structured reasoning from the LLM. The Regulator monitors navigation progress and selects the appropriate thinking mode according to three criteria, integrating the Runner and Ruminator harmoniously. Experimental results show that R3 significantly outperforms other state-of-the-art methods, exceeding them by 3.28% in SPL and 3.30% in RGSPL on the REVERIE benchmark, highlighting the effectiveness of our method in handling challenging VLN tasks.
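The dual-process control flow described above can be sketched in a few lines of Python. This is a minimal illustrative toy, not the paper's implementation: the candidate-action representation, the Ruminator's stand-in logic, and the Regulator's two criteria here (low action confidence and stalled progress) are all assumptions for illustration only; the abstract does not specify the three actual criteria.

```python
# Toy sketch of a dual-process (fast/slow) navigation step.
# All thresholds and criteria below are hypothetical stand-ins.

class Runner:
    """Fast expert policy: greedily picks the highest-scoring candidate."""
    def act(self, candidates):
        return max(candidates, key=lambda c: c["score"])

class Ruminator:
    """Slow deliberative policy (stand-in for a CoT-prompted multimodal LLM).

    Placeholder logic: prefer candidates flagged as instruction-aligned,
    which the fast policy's scores may under-rate.
    """
    def act(self, candidates):
        aligned = [c for c in candidates if c.get("aligned")]
        return max(aligned or candidates, key=lambda c: c["score"])

class Regulator:
    """Monitors progress and switches thinking modes (hypothetical criteria)."""
    def __init__(self, conf_threshold=0.5, stall_limit=3):
        self.conf_threshold = conf_threshold
        self.stall_limit = stall_limit
        self.stalled_steps = 0

    def needs_rumination(self, candidates, made_progress):
        self.stalled_steps = 0 if made_progress else self.stalled_steps + 1
        low_confidence = max(c["score"] for c in candidates) < self.conf_threshold
        return low_confidence or self.stalled_steps >= self.stall_limit

def navigate_step(candidates, made_progress, runner, ruminator, regulator):
    """Route one decision to the fast Runner or the slow Ruminator."""
    if regulator.needs_rumination(candidates, made_progress):
        return ruminator.act(candidates)
    return runner.act(candidates)
```

The design point this sketch captures is that the expensive deliberative model is invoked only when the monitoring criteria fire, so the lightweight expert handles the common case and overall inference cost stays low.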

Published

2026-03-14

How to Cite

Zhong, Y., Zhang, Z., Zhang, R., Huang, L., Gao, H., Wang, S., … Chen, Y. (2026). Run, Ruminate, and Regulate: A Dual-process Thinking System for Vision-and-Language Navigation. Proceedings of the AAAI Conference on Artificial Intelligence, 40(22), 18845–18854. https://doi.org/10.1609/aaai.v40i22.38954

Section

AAAI Technical Track on Intelligent Robotics