UNeMo: Collaborative Visual-Language Reasoning and Navigation via a Multimodal World Model

Authors

  • Changxin Huang, Shenzhen University
  • Lv Tang, Shenzhen University
  • Zhaohuan Zhan, Shenzhen MSU-BIT University
  • Lisha Yu, Sun Yat-sen University
  • Runhao Zeng, Shenzhen MSU-BIT University
  • Zun Liu, Shenzhen University
  • Zhengjie Wang, Beijing Institute of Technology
  • Jianqiang Li, Shenzhen University

DOI:

https://doi.org/10.1609/aaai.v40i22.38895

Abstract

Vision-and-Language Navigation (VLN), which requires agents to autonomously navigate complex environments using visual images and natural language instructions, remains highly challenging. Recent research on enhancing language-guided navigation reasoning with pre-trained large language models (LLMs) has shown promise. However, the reasoning of such methods is limited to the linguistic modality and lacks visual reasoning capabilities. Moreover, existing reasoning modules are optimized separately from navigation policies, leading to incompatibility and potential conflicts in optimization objectives. To tackle these challenges, we introduce UNeMo, a novel framework designed for the collaborative optimization of visual state reasoning and navigational decision-making. UNeMo introduces a Multimodal World Model (MWM) that takes visual features, language instructions, and navigational actions as inputs and jointly predicts subsequent visual states, enabling cross-modal reasoning. Through a Hierarchical Prediction-Feedback (HPN) mechanism, the MWM collaborates with the navigation policy: the first layer generates actions from current vision-and-language features; the MWM then infers the post-action visual states to guide the second layer's fine-grained decisions. This forms a dynamic bidirectional promotion mechanism in which MWM reasoning optimizes the navigation policy, while policy decisions feed back to improve the MWM's reasoning accuracy. Experiments on the R2R and REVERIE datasets show that UNeMo outperforms state-of-the-art methods by 2.1% and 0.7% in navigation accuracy on unseen scenes, validating its effectiveness.
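To make the abstract's two-layer prediction-feedback loop concrete, the following is a minimal PyTorch sketch, not the authors' implementation: the module names, feature dimension, action vocabulary, and concatenation-based fusion are all illustrative assumptions. It shows only the data flow the abstract describes, where a first layer proposes an action, the MWM predicts the resulting visual state, and a second layer refines the decision using that prediction.

```python
# Illustrative sketch (assumed shapes and fusion; not the paper's code).
import torch
import torch.nn as nn


class MultimodalWorldModel(nn.Module):
    """Predicts the post-action visual state from vision, language, action."""

    def __init__(self, dim: int = 256, n_actions: int = 6):
        super().__init__()
        self.action_emb = nn.Embedding(n_actions, dim)
        self.fuse = nn.Sequential(
            nn.Linear(3 * dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, vis, lang, action):
        # Concatenate the three modalities, then regress the next visual state.
        a = self.action_emb(action)
        return self.fuse(torch.cat([vis, lang, a], dim=-1))


class HierarchicalPolicy(nn.Module):
    """Layer 1 proposes a coarse action; the MWM imagines its outcome;
    layer 2 refines the decision conditioned on the imagined state."""

    def __init__(self, dim: int = 256, n_actions: int = 6):
        super().__init__()
        self.layer1 = nn.Linear(2 * dim, n_actions)
        self.layer2 = nn.Linear(3 * dim, n_actions)
        self.mwm = MultimodalWorldModel(dim, n_actions)

    def forward(self, vis, lang):
        # Coarse decision from current vision-and-language features.
        logits1 = self.layer1(torch.cat([vis, lang], dim=-1))
        proposed = logits1.argmax(dim=-1)
        # MWM infers the visual state that would follow the proposed action.
        predicted_vis = self.mwm(vis, lang, proposed)
        # Fine-grained decision that also sees the predicted next state.
        logits2 = self.layer2(torch.cat([vis, lang, predicted_vis], dim=-1))
        return logits1, logits2, predicted_vis


# Toy forward pass: batch of 2, feature dimension 256.
policy = HierarchicalPolicy()
vis, lang = torch.randn(2, 256), torch.randn(2, 256)
logits1, logits2, pred = policy(vis, lang)
print(logits2.shape)  # torch.Size([2, 6])
```

Under the abstract's joint-optimization claim, one would train the MWM with a prediction loss against the observed next visual state while training both policy layers with a navigation loss, so the two objectives shape each other; the specific losses and schedule here would be assumptions beyond what the abstract states.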

Published

2026-03-14

How to Cite

Huang, C., Tang, L., Zhan, Z., Yu, L., Zeng, R., Liu, Z., … Li, J. (2026). UNeMo: Collaborative Visual-Language Reasoning and Navigation via a Multimodal World Model. Proceedings of the AAAI Conference on Artificial Intelligence, 40(22), 18315–18323. https://doi.org/10.1609/aaai.v40i22.38895

Section

AAAI Technical Track on Intelligent Robotics