Mono3DVG-EnSD: Enhanced Spatial-aware and Dimension-decoupled Text Encoding for Monocular 3D Visual Grounding

Authors

  • Yuzhen Li, Hunan University
  • Min Liu, Hunan University
  • Zhaoyang Li, Hunan University
  • Yuan Bian, Hunan University
  • Xueping Wang, Hunan Normal University
  • Erbo Zhai, Hunan University
  • Yaonan Wang, Hunan University

DOI:

https://doi.org/10.1609/aaai.v40i8.37604

Abstract

Monocular 3D Visual Grounding (Mono3DVG) is an emerging task that locates 3D objects in RGB images using text descriptions with geometric cues. However, existing methods face two key limitations. First, they often over-rely on high-certainty keywords that explicitly identify the target object while neglecting critical spatial descriptions. Second, generalized textual features contain both 2D and 3D descriptive information, thereby capturing an additional dimension of detail compared to purely 2D or 3D visual features. This characteristic leads to cross-dimensional interference when refining visual features under text guidance. To overcome these challenges, we propose Mono3DVG-EnSD, a novel framework that integrates two key components: the CLIP-Guided Lexical Certainty Adapter (CLIP-LCA) and the Dimension-Decoupled Module (D2M). The CLIP-LCA dynamically masks high-certainty keywords while retaining low-certainty implicit spatial descriptions, thereby forcing the model to develop a deeper understanding of spatial relationships in captions for object localization. Meanwhile, the D2M decouples dimension-specific (2D/3D) textual features from generalized textual features to guide the corresponding visual features in the same dimension, which mitigates cross-dimensional interference by ensuring dimensionally consistent cross-modal interactions. Through comprehensive comparisons and ablation studies on the Mono3DRefer dataset, our method achieves state-of-the-art (SOTA) performance across all metrics. Notably, it improves the challenging Far(Acc@0.5) scenario by a significant +13.54%.

Published

2026-03-14

How to Cite

Li, Y., Liu, M., Li, Z., Bian, Y., Wang, X., Zhai, E., & Wang, Y. (2026). Mono3DVG-EnSD: Enhanced Spatial-aware and Dimension-decoupled Text Encoding for Monocular 3D Visual Grounding. Proceedings of the AAAI Conference on Artificial Intelligence, 40(8), 6726–6734. https://doi.org/10.1609/aaai.v40i8.37604

Issue

Section

AAAI Technical Track on Computer Vision V