Mono3DVG: 3D Visual Grounding in Monocular Images
DOI:
https://doi.org/10.1609/aaai.v38i7.28525
Keywords:
CV: Language and Vision, CV: 3D Computer Vision, CV: Multi-modal Vision, CV: Object Detection & Categorization, CV: Scene Analysis & Understanding
Abstract
We introduce a novel task of 3D visual grounding in monocular RGB images using language descriptions with both appearance and geometry information. Specifically, we build a large-scale dataset, Mono3DRefer, which contains 3D object targets with corresponding geometric text descriptions, generated by ChatGPT and refined manually. To foster this task, we propose Mono3DVG-TR, an end-to-end transformer-based network, which takes advantage of both the appearance and geometry information in text embeddings for multi-modal learning and 3D object localization. A depth predictor is designed to explicitly learn geometry features. A dual text-guided adapter is proposed to refine multiscale visual and geometry features of the referred object. Based on depth-text-visual stacking attention, the decoder fuses object-level geometric cues and visual appearance into a learnable query. Comprehensive benchmarks and insightful analyses are provided for Mono3DVG. Extensive comparisons and ablation studies show that our method significantly outperforms all baselines. The dataset and code will be released.
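The abstract describes a decoder that sequentially fuses depth, text, and visual features into a learnable object query via stacked cross-attention. The following is only an illustrative sketch of that general idea in NumPy, not the paper's implementation; the function names, single-head attention, and residual updates are assumptions made for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, context, d):
    # query: (1, d); context: (n, d). Single-head scaled dot-product
    # attention: the query attends over all context tokens.
    scores = query @ context.T / np.sqrt(d)   # (1, n)
    return softmax(scores) @ context          # (1, d)

def stacking_attention(query, depth_feats, text_feats, visual_feats, d):
    # Illustrative "stacking" fusion: attend to depth, then text, then
    # visual features, accumulating each modality into the object query
    # with a residual update (an assumption for this sketch).
    for context in (depth_feats, text_feats, visual_feats):
        query = query + cross_attention(query, context, d)
    return query

rng = np.random.default_rng(0)
d = 8
query = rng.normal(size=(1, d))            # learnable object query
out = stacking_attention(query,
                         rng.normal(size=(16, d)),   # depth tokens
                         rng.normal(size=(12, d)),   # text tokens
                         rng.normal(size=(16, d)),   # visual tokens
                         d)
print(out.shape)  # (1, 8)
```

In a real transformer decoder these steps would be multi-head attention layers with layer normalization and feed-forward blocks; the sketch only shows the sequential per-modality fusion into one query vector.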
Published
2024-03-24
How to Cite
Zhan, Y., Yuan, Y., & Xiong, Z. (2024). Mono3DVG: 3D Visual Grounding in Monocular Images. Proceedings of the AAAI Conference on Artificial Intelligence, 38(7), 6988-6996. https://doi.org/10.1609/aaai.v38i7.28525
Section
AAAI Technical Track on Computer Vision VI