Leveraging Imagery Data with Spatial Point Prior for Weakly Semi-supervised 3D Object Detection

Authors

  • Hongzhi Gao University of Science and Technology of China
  • Zheng Chen University of Science and Technology of China
  • Zehui Chen University of Science and Technology of China
  • Lin Chen University of Science and Technology of China
  • Jiaming Liu Peking University
  • Shanghang Zhang Peking University
  • Feng Zhao University of Science and Technology of China

DOI:

https://doi.org/10.1609/aaai.v38i3.27948

Keywords:

CV: Object Detection & Categorization, CV: Multi-modal Vision, CV: Vision for Robotics & Autonomous Driving, ROB: Multimodal Perception & Sensor Fusion

Abstract

Training high-accuracy 3D detectors requires massive 3D annotations with 7 degrees of freedom, which are laborious and time-consuming to produce. Point annotations have therefore been proposed as a promising alternative for 3D detection: they are not only more accessible and less expensive but also provide strong spatial cues for object localization. In this paper, we empirically discover that it is non-trivial to merely adapt Point-DETR to its 3D form, encountering two main bottlenecks: 1) it fails to encode a strong 3D prior into the model, and 2) it generates low-quality pseudo labels in distant regions due to the extreme sparsity of LiDAR points. To overcome these challenges, we introduce Point-DETR3D, a teacher-student framework for weakly semi-supervised 3D detection, designed to fully capitalize on point-wise supervision within a constrained instance-wise annotation budget. Different from Point-DETR, which encodes 3D positional information solely through a point encoder, we propose an explicit positional query initialization strategy to strengthen the positional prior. Considering the low quality of pseudo labels produced by the teacher model in distant regions, we enhance the detector's perception by incorporating dense imagery data through a novel Cross-Modal Deformable RoI Fusion (D-RoI). Moreover, an innovative point-guided self-supervised learning technique is proposed to fully exploit point priors, even in the student model. Extensive experiments on the representative nuScenes dataset demonstrate that Point-DETR3D achieves significant improvements over previous works. Notably, with only 5% of labeled data, Point-DETR3D achieves over 90% of the performance of its fully supervised counterpart.
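The explicit positional query initialization described above can be illustrated with a minimal sketch: instead of learning object queries from scratch, each query's reference point is seeded directly from an annotated 3D point, paired with a sinusoidal positional embedding. All function and variable names here are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def init_positional_queries(point_annos, embed_dim=256):
    """Hedged sketch of explicit positional query initialization.

    point_annos: (N, 3) tensor of annotated per-instance points (x, y, z).
    Returns reference points carrying the explicit 3D prior, plus a
    sinusoidal positional embedding of dimension 6 * (embed_dim // 6).
    """
    # Each query's reference point is the annotated point itself,
    # rather than a learned embedding.
    ref_points = point_annos.clone()

    # Standard sinusoidal encoding per coordinate, split across frequencies.
    n_freqs = embed_dim // 6
    freqs = 10000.0 ** (-torch.arange(n_freqs, dtype=torch.float32) / n_freqs)
    phase = point_annos.unsqueeze(-1) * freqs          # (N, 3, n_freqs)
    pos_embed = torch.cat([phase.sin(), phase.cos()], dim=-1)  # (N, 3, 2*n_freqs)
    pos_embed = pos_embed.flatten(1)                   # (N, 6*n_freqs)
    return ref_points, pos_embed
```

In a DETR-style decoder, these embeddings would replace the randomly initialized query positions, giving each query an explicit spatial anchor from the cheap point label.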

Published

2024-03-24

How to Cite

Gao, H., Chen, Z., Chen, Z., Chen, L., Liu, J., Zhang, S., & Zhao, F. (2024). Leveraging Imagery Data with Spatial Point Prior for Weakly Semi-supervised 3D Object Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 38(3), 1797-1805. https://doi.org/10.1609/aaai.v38i3.27948

Section

AAAI Technical Track on Computer Vision II