UrbanNav: Learning Language-Guided Embodied Urban Navigation from Web-Scale Human Trajectories

Authors

  • Yanghong Mei, Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences
  • Yirong Yang, Beihang University
  • Longteng Guo, Institute of Automation, Chinese Academy of Sciences
  • Qunbo Wang, Beijing Jiaotong University
  • Ming-Ming Yu, Beihang University
  • Xingjian He, Institute of Automation, Chinese Academy of Sciences
  • Wenjun Wu, Beihang University; Hangzhou International Innovation Institute
  • Jing Liu, Institute of Automation, Chinese Academy of Sciences; School of Artificial Intelligence, University of Chinese Academy of Sciences

DOI:

https://doi.org/10.1609/aaai.v40i22.38916

Abstract

Navigating complex urban environments using natural language instructions poses significant challenges for embodied agents, including noisy language instructions, ambiguous spatial references, diverse landmarks, and dynamic street scenes. Current visual navigation methods are typically limited to simulated or off-street environments, and often rely on precise goal formats, such as specific coordinates or images. This limits their effectiveness for autonomous agents like last-mile delivery robots navigating unfamiliar cities. To address these limitations, we introduce UrbanNav, a scalable framework that trains embodied agents to follow free-form language instructions in diverse urban settings. Leveraging web-scale city walking videos, we develop a scalable annotation pipeline that aligns human navigation trajectories with language instructions grounded in real-world landmarks. UrbanNav encompasses over 1,500 hours of navigation data and 3 million instruction-trajectory-landmark triplets, capturing a wide range of urban scenarios. Our model learns robust navigation policies for complex urban scenarios, demonstrating superior spatial reasoning, robustness to noisy instructions, and generalization to unseen urban settings. Experimental results show that UrbanNav significantly outperforms existing methods, highlighting the potential of large-scale web video data to enable language-guided, real-world urban navigation for embodied agents.

Published

2026-03-14

How to Cite

Mei, Y., Yang, Y., Guo, L., Wang, Q., Yu, M.-M., He, X., … Liu, J. (2026). UrbanNav: Learning Language-Guided Embodied Urban Navigation from Web-Scale Human Trajectories. Proceedings of the AAAI Conference on Artificial Intelligence, 40(22), 18505–18513. https://doi.org/10.1609/aaai.v40i22.38916

Section

AAAI Technical Track on Intelligent Robotics