AugRefer: Advancing 3D Visual Grounding via Cross-Modal Augmentation and Spatial Relation-based Referring

Authors

  • Xinyi Wang, University of Science and Technology of China
  • Na Zhao, Singapore University of Technology and Design
  • Zhiyuan Han, University of Science and Technology of China
  • Dan Guo, Hefei University of Technology
  • Xun Yang, University of Science and Technology of China

DOI:

https://doi.org/10.1609/aaai.v39i8.32863

Abstract

3D visual grounding (3DVG), which aims to correlate a natural language description with the target object within a 3D scene, is a significant yet challenging task. Despite recent advancements in this domain, existing approaches commonly suffer from a shortage of text-3D pairs: the amount and diversity available for training are limited. Moreover, they fall short in effectively leveraging different contextual clues (e.g., the rich spatial relations within 3D visual space) for grounding. To address these limitations, we propose AugRefer, a novel approach for advancing 3D visual grounding. AugRefer introduces cross-modal augmentation, which generates diverse text-3D pairs by placing objects into 3D scenes and creating accurate, semantically rich descriptions using foundation models. Notably, the resulting pairs can be used by any existing 3DVG method to enrich its training data. In addition, AugRefer presents a language-spatial adaptive decoder that effectively adapts potential referring objects based on the language description and various 3D spatial relations. Extensive experiments on three benchmark datasets clearly validate the effectiveness of AugRefer.

Published

2025-04-11

How to Cite

Wang, X., Zhao, N., Han, Z., Guo, D., & Yang, X. (2025). AugRefer: Advancing 3D Visual Grounding via Cross-Modal Augmentation and Spatial Relation-based Referring. Proceedings of the AAAI Conference on Artificial Intelligence, 39(8), 8006-8014. https://doi.org/10.1609/aaai.v39i8.32863

Section

AAAI Technical Track on Computer Vision VII