Contrastive Predictive Autoencoders for Dynamic Point Cloud Self-Supervised Learning
DOI:
https://doi.org/10.1609/aaai.v37i8.26170
Keywords:
ML: Unsupervised & Self-Supervised Learning, CV: 3D Computer Vision, CV: Video Understanding & Activity Analysis, CV: Biometrics, Face, Gesture & Pose
Abstract
We present a new self-supervised paradigm for point cloud sequence understanding. Inspired by discriminative and generative self-supervised methods, we design two tasks, namely point cloud sequence based Contrastive Prediction and Reconstruction (CPR), to collaboratively learn more comprehensive spatiotemporal representations. Specifically, dense point cloud segments are first fed into an encoder to extract embeddings. All but the last are then aggregated by a context-aware autoregressor to make predictions for the last target segment. To model multi-granularity structures, local and global contrastive learning are performed between predictions and targets. To further improve the generalization of the representations, the predictions are also used by a decoder to reconstruct the raw point cloud sequences, where point cloud colorization is employed to discriminate between different frames. By combining the classic contrastive and reconstruction paradigms, the learned representations gain both global discrimination and local perception. We conduct experiments on four point cloud sequence benchmarks and report results on action recognition and gesture recognition under multiple experimental settings. The performance is comparable with supervised methods and shows strong transferability.
Downloads
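The global contrastive prediction step described in the abstract can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the segment embeddings, the mean-pooling autoregressor, and the linear prediction head below are hypothetical stand-ins, and the InfoNCE-style loss is one common choice for contrasting a prediction against in-batch targets.

```python
import numpy as np

rng = np.random.default_rng(0)
B, T, D = 8, 4, 16  # sequences per batch, segments per sequence, embedding dim

# Stand-in segment embeddings: in the paper these come from a point cloud
# encoder applied to dense segments; here they are random features.
z = rng.normal(size=(B, T, D))

# Context-aware autoregressor (sketch): aggregate all but the last segment,
# then predict an embedding for the last (target) segment.
context = z[:, :-1].mean(axis=1)            # (B, D)
W_pred = rng.normal(size=(D, D)) * 0.1      # hypothetical prediction head
pred = context @ W_pred                     # (B, D)

def info_nce(pred, target, tau=0.1):
    """Global contrastive loss: each prediction should match its own
    sequence's target segment against the other targets in the batch."""
    p = pred / np.linalg.norm(pred, axis=1, keepdims=True)
    t = target / np.linalg.norm(target, axis=1, keepdims=True)
    logits = p @ t.T / tau                  # (B, B) cosine similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # numeric stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))      # diagonal = positive pairs

loss = info_nce(pred, z[:, -1])
print(float(loss))
```

The paper additionally applies a local contrastive term and a reconstruction decoder on the predictions; those components are omitted here for brevity.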
Published
2023-06-26
How to Cite
Sheng, X., Shen, Z., & Xiao, G. (2023). Contrastive Predictive Autoencoders for Dynamic Point Cloud Self-Supervised Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(8), 9802-9810. https://doi.org/10.1609/aaai.v37i8.26170
Issue
Section
AAAI Technical Track on Machine Learning III