Contrastive Instruction-Trajectory Learning for Vision-Language Navigation


  • Xiwen Liang Shenzhen Campus of Sun Yat-sen University
  • Fengda Zhu Monash University
  • Yi Zhu Huawei Noah’s Ark Lab
  • Bingqian Lin Shenzhen Campus of Sun Yat-sen University
  • Bing Wang Alibaba Group
  • Xiaodan Liang Shenzhen Campus of Sun Yat-sen University



Computer Vision (CV)


The vision-language navigation (VLN) task requires an agent to reach a target under the guidance of a natural language instruction. Previous works learn to navigate step-by-step following an instruction. However, such methods may fail to discriminate similarities and discrepancies across instruction-trajectory pairs, and they ignore the temporal continuity of sub-instructions. These problems hinder agents from learning distinctive vision-and-language representations, harming the robustness and generalizability of the navigation policy. In this paper, we propose a Contrastive Instruction-Trajectory Learning (CITL) framework that exploits invariance across similar data samples and variance across different ones to learn distinctive representations for robust navigation. Specifically, we propose: (1) a coarse-grained contrastive learning objective to enhance vision-and-language representations by contrasting semantics of full trajectory observations and instructions, respectively; (2) a fine-grained contrastive learning objective to perceive instructions by leveraging the temporal information of the sub-instructions; (3) a pairwise sample-reweighting mechanism that mines hard samples and hence mitigates the influence of data sampling bias in contrastive learning. Our CITL can be easily integrated with VLN backbones to form a new learning paradigm and achieve better generalizability in unseen environments. Extensive experiments show that the model with CITL surpasses the previous state-of-the-art methods on R2R, R4R, and RxR.
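To make the coarse-grained objective concrete, the following is a minimal NumPy sketch of a generic InfoNCE-style contrastive loss between instruction and trajectory embeddings, where each instruction's positive is the trajectory at the same batch index and all other trajectories serve as negatives. This is an illustrative approximation, not the paper's exact implementation; the embedding dimensions, temperature, and function name are assumptions for the example.

```python
import numpy as np

def info_nce_loss(instr_emb, traj_emb, temperature=0.1):
    """Generic InfoNCE loss over a batch of paired embeddings.

    instr_emb, traj_emb: arrays of shape (B, D); row i of each array
    forms a positive pair, all other rows act as negatives.
    """
    # L2-normalize so dot products are cosine similarities
    instr = instr_emb / np.linalg.norm(instr_emb, axis=1, keepdims=True)
    traj = traj_emb / np.linalg.norm(traj_emb, axis=1, keepdims=True)
    logits = instr @ traj.T / temperature          # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Loss: mean negative log-probability of the matched (diagonal) pairs
    return -np.mean(np.diag(log_probs))

# Usage: aligned pairs should yield a lower loss than mismatched ones
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
aligned = info_nce_loss(x, x)
mismatched = info_nce_loss(x, np.roll(x, 1, axis=0))
```

The paper's pairwise sample-reweighting mechanism would additionally upweight hard negatives (high-similarity non-matching pairs) within this softmax, which the plain sketch above does not do.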




How to Cite

Liang, X., Zhu, F., Zhu, Y., Lin, B., Wang, B., & Liang, X. (2022). Contrastive Instruction-Trajectory Learning for Vision-Language Navigation. Proceedings of the AAAI Conference on Artificial Intelligence, 36(2), 1592-1600.



AAAI Technical Track on Computer Vision II