JPV-Net: Joint Point-Voxel Representations for Accurate 3D Object Detection
Keywords: Computer Vision (CV)
Abstract
Voxel and point representations are widely applied in recent 3D object detection tasks on LiDAR point clouds. Voxel representations contribute to efficiently and rapidly locating objects, whereas point representations are capable of describing intra-object spatial relationships for detection refinement. In this work, we aim to exploit the strengths of both representations, and present a novel two-stage detector, named Joint Point-Voxel Network (JPV-Net). Specifically, our framework is equipped with a Dual Encoders-Fusion Decoder, which consists of dual encoders that extract voxel features of sketchy 3D scenes and point features rich in geometric context, respectively, and a Feature Propagation Fusion (FP-Fusion) decoder that attentively fuses them from coarse to fine. By leveraging the advantages of these features, the refinement network can effectively eliminate false detections and provide better accuracy. Besides, to further develop the perception characteristics of the voxel CNN and the point backbone, we design two novel intersection-over-union (IoU) estimation modules for proposal generation and refinement, both of which alleviate the misalignment between localization quality and classification confidence. Extensive experiments on the KITTI and ONCE datasets demonstrate that our proposed JPV-Net outperforms other state-of-the-art methods by remarkable margins.
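As background for the IoU estimation modules mentioned above: the quantity being estimated is the overlap between a predicted box and a ground-truth box, which can replace the raw classification score as a localization-aware confidence. The paper's modules regress this value from learned features; the sketch below only illustrates the target quantity itself, for the simplified case of axis-aligned 3D boxes given as (x1, y1, z1, x2, y2, z2). The function name and box format are illustrative, not from the paper (detectors such as JPV-Net use rotated boxes, which require a more involved overlap computation).

```python
def iou_3d(box_a, box_b):
    """Axis-aligned 3D IoU between boxes given as (x1, y1, z1, x2, y2, z2).

    Illustrative only: real LiDAR detectors use rotated (yaw-oriented) boxes.
    """
    # Intersection extent along each axis, clamped to zero when disjoint
    dx = max(0.0, min(box_a[3], box_b[3]) - max(box_a[0], box_b[0]))
    dy = max(0.0, min(box_a[4], box_b[4]) - max(box_a[1], box_b[1]))
    dz = max(0.0, min(box_a[5], box_b[5]) - max(box_a[2], box_b[2]))
    inter = dx * dy * dz

    vol_a = (box_a[3] - box_a[0]) * (box_a[4] - box_a[1]) * (box_a[5] - box_a[2])
    vol_b = (box_b[3] - box_b[0]) * (box_b[4] - box_b[1]) * (box_b[5] - box_b[2])
    return inter / (vol_a + vol_b - inter)
```

Using such an IoU value (rather than the classification score alone) to rank proposals is what mitigates the misalignment the abstract refers to: a box can be confidently classified yet poorly localized, and vice versa.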
How to Cite
Song, N., Jiang, T., & Yao, J. (2022). JPV-Net: Joint Point-Voxel Representations for Accurate 3D Object Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 36(2), 2271-2279. https://doi.org/10.1609/aaai.v36i2.20125
AAAI Technical Track on Computer Vision II