FM-OV3D: Foundation Model-Based Cross-Modal Knowledge Blending for Open-Vocabulary 3D Detection

Authors

  • Dongmei Zhang, Peking University
  • Chang Li, Peking University
  • Renrui Zhang, The Chinese University of Hong Kong
  • Shenghao Xie, Wuhan University
  • Wei Xue, Hong Kong University of Science and Technology
  • Xiaodong Xie, Peking University
  • Shanghang Zhang, Peking University

DOI:

https://doi.org/10.1609/aaai.v38i15.29612

Keywords:

ML: Multimodal Learning, CV: 3D Computer Vision, CV: Language and Vision, ML: Deep Neural Architectures and Foundation Models

Abstract

The superior performance of pre-trained foundation models on various visual tasks underscores their potential to enhance the open-vocabulary ability of 2D models. Existing methods explore analogous applications in the 3D space. However, most of them center on extracting knowledge from a single foundation model, which limits the open-vocabulary ability of 3D models. We hypothesize that leveraging complementary pre-trained knowledge from various foundation models can improve knowledge transfer from 2D pre-trained visual language models to the 3D space. In this work, we propose FM-OV3D, a method of Foundation Model-based Cross-modal Knowledge Blending for Open-Vocabulary 3D Detection, which improves the open-vocabulary localization and recognition abilities of 3D models by blending knowledge from multiple pre-trained foundation models, achieving true open-vocabulary detection without facing constraints from the original 3D datasets. Specifically, to learn open-vocabulary 3D localization, we adopt the open-vocabulary localization knowledge of the Grounded-Segment-Anything model. For open-vocabulary 3D recognition, we leverage the knowledge of generative foundation models, including GPT-3 and Stable Diffusion, and of cross-modal discriminative models such as CLIP. Experimental results on two popular benchmarks for open-vocabulary 3D object detection show that our model efficiently learns knowledge from multiple foundation models to enhance the open-vocabulary ability of the 3D model, and achieves state-of-the-art performance on open-vocabulary 3D object detection tasks. Code is released at https://github.com/dmzhang0425/FM-OV3D.git.
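To make the blending idea concrete, below is a minimal, self-contained sketch of the open-vocabulary recognition step the abstract describes: scoring 3D proposal features against per-class text embeddings in a shared embedding space. This is not the authors' implementation; all tensors are random stand-ins, and in FM-OV3D the text embeddings would come from CLIP encoding GPT-3-generated class descriptions, while the 3D features would come from the detector's proposal head.

```python
# Illustrative sketch (assumed setup, not the paper's code): open-vocabulary
# classification of 3D proposals via cosine similarity to text embeddings.
import torch
import torch.nn.functional as F

num_proposals, num_classes, dim = 8, 20, 512

# Stand-in for 3D proposal features projected into the shared embedding space.
proposal_feats = torch.randn(num_proposals, dim)

# Stand-in for per-class text embeddings (e.g., CLIP features of class prompts).
text_feats = torch.randn(num_classes, dim)

# Cosine-similarity logits: normalize both sides, then take inner products.
proposal_feats = F.normalize(proposal_feats, dim=-1)
text_feats = F.normalize(text_feats, dim=-1)
logits = proposal_feats @ text_feats.T  # shape: (num_proposals, num_classes)

# Each proposal takes the class whose text embedding it aligns with best;
# novel categories are handled by simply appending new text embeddings.
pred_classes = logits.argmax(dim=-1)
print(pred_classes)
```

Because the classifier is just a set of text embeddings, extending the vocabulary at test time requires no retraining, which is what allows detection beyond the categories annotated in the original 3D datasets.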

Published

2024-03-24

How to Cite

Zhang, D., Li, C., Zhang, R., Xie, S., Xue, W., Xie, X., & Zhang, S. (2024). FM-OV3D: Foundation Model-Based Cross-Modal Knowledge Blending for Open-Vocabulary 3D Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 38(15), 16723-16731. https://doi.org/10.1609/aaai.v38i15.29612

Issue

Vol. 38 No. 15 (2024)

Section

AAAI Technical Track on Machine Learning VI