STEP-Nav: Spatial-Temporal Efficient Visual Token Pruning for Vision-and-Language Navigation with Large Language Models

Authors

  • Yantao Lu, Northwest Polytechnical University, Xi'an
  • Shiqi Sun, Northwest Polytechnical University, Xi'an
  • Ning Liu, Beijing Innovation Center of Humanoid Robotics
  • Bo Jiang, Didi Chuxing
  • Ying Zhang, Northwest Polytechnical University, Xi'an
  • Jinchao Chen, Northwest Polytechnical University, Xi'an
  • Chenglie Du, Northwest Polytechnical University, Xi'an

DOI:

https://doi.org/10.1609/aaai.v40i29.39588

Abstract

Vision-and-Language Navigation (VLN) plays a critical role in embodied AI tasks, particularly navigating unseen environments by following natural language instructions. Recent advancements leverage large language models (LLMs) to improve the accuracy and generalizability of VLN systems by encoding image sequences as dense token representations. However, this tokenization approach incurs substantial computational overhead due to two key inefficiencies: 1) ego-centric camera views often include navigation-irrelevant regions (e.g., sky or distant backgrounds), and 2) high-frame-rate image sequences introduce temporal redundancy. To address these challenges, we propose Spatial-Temporal Efficient Visual Token Pruning (STEP-Nav), a unified framework that simultaneously prunes redundant visual tokens and fine-tunes VLN models to preserve navigation performance. In particular, STEP-Nav incorporates a distance- and content-aware token evaluation mechanism to remove irrelevant tokens at the spatial level, along with temporal-level similarity-based filtering to reduce redundancy across sequential frames. To ensure pruning does not harm task performance, we introduce a distortion-aware fine-tuning strategy that aligns pruned-token representations with their full-token counterparts while maintaining navigation accuracy. Experiments on the R2R and RxR benchmarks using Navid-CE and NavGPT-2 as base models demonstrate that STEP-Nav preserves over 95% of the performance while reducing 66.7% of tokens, outperforming existing token pruning baselines.
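The two pruning levels described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the scoring function, similarity threshold, and keep ratio are illustrative assumptions. Spatial pruning keeps the highest-scoring tokens within a frame (the paper scores tokens by distance and content); temporal filtering drops a frame whose pooled features are nearly identical to the last kept frame.

```python
import numpy as np

def spatial_prune(tokens, scores, keep_ratio=1 / 3):
    """Keep the top keep_ratio fraction of visual tokens by score.

    tokens: (N, D) array of token features for one frame.
    scores: (N,) relevance scores (hypothetical stand-in for the
            paper's distance- and content-aware evaluation).
    Returns (kept_indices, kept_tokens), indices in original order.
    """
    k = max(1, int(round(keep_ratio * len(tokens))))
    idx = np.sort(np.argsort(scores)[-k:])  # top-k, order preserved
    return idx, tokens[idx]

def temporal_filter(frame_features, sim_threshold=0.95):
    """Keep a frame only if its pooled feature differs enough
    (cosine similarity below sim_threshold) from the last kept frame.

    frame_features: (T, D) array, one pooled feature per frame.
    Returns the list of kept frame indices.
    """
    kept = [0]  # always keep the first frame
    for t in range(1, len(frame_features)):
        a = frame_features[kept[-1]]
        b = frame_features[t]
        sim = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
        if sim < sim_threshold:
            kept.append(t)
    return kept
```

With `keep_ratio=1/3`, the spatial step alone matches the abstract's 66.7% token reduction; in practice the temporal filter removes additional whole-frame redundancy on top of that.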

Published

2026-03-14

How to Cite

Lu, Y., Sun, S., Liu, N., Jiang, B., Zhang, Y., Chen, J., & Du, C. (2026). STEP-Nav: Spatial-Temporal Efficient Visual Token Pruning for Vision-and-Language Navigation with Large Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 40(29), 24097-24105. https://doi.org/10.1609/aaai.v40i29.39588

Section

AAAI Technical Track on Machine Learning VI