CRIN: Rotation-Invariant Point Cloud Analysis and Rotation Estimation via Centrifugal Reference Frame
DOI:
https://doi.org/10.1609/aaai.v37i2.25271
Keywords:
CV: 3D Computer Vision, CV: Representation Learning for Vision, CV: Object Detection & Categorization, CV: Segmentation
Abstract
Various recent methods attempt to achieve rotation-invariant 3D deep learning by replacing the input coordinates of points with relative distances and angles. Because these low-level features are incomplete, such methods pay the price of losing global information. In this paper, we propose CRIN, the Centrifugal Rotation-Invariant Network. CRIN directly takes the coordinates of points as input and transforms local points into rotation-invariant representations via centrifugal reference frames. Aided by centrifugal reference frames, each point corresponds to a discrete rotation, so rotation information can be implicitly stored in point features. However, discrete points are far from covering the whole rotation space, so we further introduce a point-based continuous distribution over 3D rotations. We also propose an attention-based down-sampling strategy that samples points invariantly to rotations. Finally, a relation module reinforces the long-range dependencies between sampled points and predicts the anchor point for unsupervised rotation estimation. Extensive experiments show that our method achieves rotation invariance, accurately estimates object rotation, and obtains state-of-the-art results on rotation-augmented classification and part segmentation. Ablation studies validate the effectiveness of the network design.
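To make the core idea concrete, below is a minimal sketch (not the authors' code) of how expressing local neighborhoods in a per-point reference frame yields features that are invariant to global rotations. The exact centrifugal reference frame construction in CRIN is not given on this page; as an assumption, the sketch uses the radial (centroid-to-point) direction as the frame's first axis and a neighbor to fix the remaining axes.

```python
# Illustrative sketch only: per-point reference frames for rotation-invariant
# local features. The frame construction here is an assumption standing in for
# CRIN's centrifugal reference frame.
import numpy as np

def radial_frame(p, n):
    """Orthonormal frame at point p (relative to the centroid at the origin),
    with the radial direction as the first axis and neighbor n fixing the rest."""
    e1 = p / np.linalg.norm(p)              # radial ("centrifugal") axis
    v = n - np.dot(n, e1) * e1              # neighbor component orthogonal to e1
    e2 = v / np.linalg.norm(v)              # assumes n is not parallel to e1
    e3 = np.cross(e1, e2)
    return np.stack([e1, e2, e3])           # rows are the frame axes

def local_features(points, k=4):
    """Express each point's k-NN offsets in that point's own radial frame."""
    centered = points - points.mean(axis=0)
    dists = np.linalg.norm(centered[:, None] - centered[None], axis=-1)
    feats = []
    for i in range(len(centered)):
        nbrs = np.argsort(dists[i])[1:k + 1]           # nearest neighbors, skip self
        F = radial_frame(centered[i], centered[nbrs[0]])
        feats.append((centered[nbrs] - centered[i]) @ F.T)  # frame-relative coords
    return np.stack(feats)

rng = np.random.default_rng(0)
pts = rng.normal(size=(64, 3))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random rotation via QR
Q *= np.sign(np.linalg.det(Q))                # force det = +1 (proper rotation)
f0, f1 = local_features(pts), local_features(pts @ Q.T)
print(np.allclose(f0, f1, atol=1e-6))         # True: features unchanged by rotation
```

Because both the frame axes and the neighbor offsets rotate together, the frame-relative coordinates cancel the rotation exactly, which is the property the abstract refers to as rotation invariance.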
Published
2023-06-26
How to Cite
Lou, Y., Ye, Z., You, Y., Jiang, N., Lu, J., Wang, W., Ma, L., & Lu, C. (2023). CRIN: Rotation-Invariant Point Cloud Analysis and Rotation Estimation via Centrifugal Reference Frame. Proceedings of the AAAI Conference on Artificial Intelligence, 37(2), 1817-1825. https://doi.org/10.1609/aaai.v37i2.25271
Issue
Vol. 37 No. 2 (2023)
Section
AAAI Technical Track on Computer Vision II