Towards Compact 3D Representations via Point Feature Enhancement Masked Autoencoders

Authors

  • Yaohua Zha — Tsinghua Shenzhen International Graduate School, Tsinghua University; Research Center of Artificial Intelligence, Peng Cheng Laboratory
  • Huizhen Ji — Tsinghua Shenzhen International Graduate School, Tsinghua University
  • Jinmin Li — Tsinghua Shenzhen International Graduate School, Tsinghua University
  • Rongsheng Li — Tsinghua Shenzhen International Graduate School, Tsinghua University
  • Tao Dai — College of Computer Science and Software Engineering, Shenzhen University
  • Bin Chen — Harbin Institute of Technology, Shenzhen
  • Zhi Wang — Tsinghua Shenzhen International Graduate School, Tsinghua University
  • Shu-Tao Xia — Tsinghua Shenzhen International Graduate School, Tsinghua University; Research Center of Artificial Intelligence, Peng Cheng Laboratory

DOI:

https://doi.org/10.1609/aaai.v38i7.28522

Keywords:

CV: 3D Computer Vision, ML: Unsupervised & Self-Supervised Learning

Abstract

Learning 3D representations plays a critical role in masked autoencoder (MAE) based pre-training methods for point clouds, including both single-modal and cross-modal MAEs. Although cross-modal MAE methods learn strong 3D representations via auxiliary knowledge from other modalities, they often suffer from heavy computational burdens and rely on massive cross-modal data pairs that are frequently unavailable, which hinders their application in practice. In contrast, single-modal methods that take only point clouds as input are preferred in real applications due to their simplicity and efficiency. However, such methods tend to learn limited 3D representations from globally random-masked input. To learn compact 3D representations, we propose a simple yet effective Point Feature Enhancement Masked Autoencoder (Point-FEMAE), which mainly consists of a global branch and a local branch to capture latent semantic features. Specifically, to learn more compact features, a shared-parameter Transformer encoder extracts point features from the global and local unmasked patches obtained by global random and local block masking strategies, each followed by a specific decoder for reconstruction. Meanwhile, to further enhance features in the local branch, we propose a Local Enhancement Module with local patch convolution to perceive fine-grained local context at larger scales. Our method significantly improves pre-training efficiency compared to cross-modal alternatives, and extensive downstream experiments demonstrate its state-of-the-art effectiveness, particularly outperforming our baseline (Point-MAE) by 5.16%, 5.00%, and 5.04% on the three variants of ScanObjectNN, respectively. Code is available at https://github.com/zyh16143998882/AAAI24-PointFEMAE.
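The abstract describes a dual-branch design: two masking strategies (global random and local block) feed a shared-parameter Transformer encoder, with a per-branch decoder and a convolutional Local Enhancement Module on the local branch. The sketch below is not the authors' code; it is a minimal PyTorch illustration of that structure under our own assumptions. The class name PointFEMAESketch, the mask ratio, the use of a single Conv1d as a stand-in for the Local Enhancement Module, and the linear placeholder decoders are all hypothetical.

```
# Hedged sketch of the dual-branch masking idea from the abstract (assumptions noted above).
import torch
import torch.nn as nn


def global_random_mask(num_patches: int, mask_ratio: float) -> torch.Tensor:
    """Global branch: keep a random subset of patch indices."""
    num_keep = int(num_patches * (1.0 - mask_ratio))
    return torch.randperm(num_patches)[:num_keep]


def local_block_mask(centers: torch.Tensor, mask_ratio: float) -> torch.Tensor:
    """Local branch: mask a contiguous spatial block around a random seed patch
    (one plausible reading of 'local block mask'), keeping the remaining patches."""
    num_patches = centers.shape[0]
    num_keep = int(num_patches * (1.0 - mask_ratio))
    seed = centers[torch.randint(num_patches, (1,))]            # random seed center, shape (1, 3)
    dist = torch.cdist(centers, seed).squeeze(-1)               # distance of every patch to the seed
    return torch.argsort(dist, descending=True)[:num_keep]      # keep patches farthest from the seed


class PointFEMAESketch(nn.Module):
    """Shared-parameter encoder over two masked views, with a per-branch decoder."""

    def __init__(self, dim: int = 384, depth: int = 12, heads: int = 6):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)       # shared by both branches
        self.local_enhance = nn.Conv1d(dim, dim, kernel_size=3, padding=1)  # stand-in for the LEM
        self.global_decoder = nn.Linear(dim, dim)                # placeholder decoders
        self.local_decoder = nn.Linear(dim, dim)

    def forward(self, patch_tokens: torch.Tensor, centers: torch.Tensor):
        # patch_tokens: (B, N, C) embedded point patches; centers: (N, 3) patch centers
        g_idx = global_random_mask(patch_tokens.shape[1], mask_ratio=0.6)
        l_idx = local_block_mask(centers, mask_ratio=0.6)

        g_feat = self.encoder(patch_tokens[:, g_idx])            # global branch
        l_feat = self.encoder(patch_tokens[:, l_idx])            # local branch (same encoder weights)
        l_feat = self.local_enhance(l_feat.transpose(1, 2)).transpose(1, 2)

        return self.global_decoder(g_feat), self.local_decoder(l_feat)
```

For the actual patch embedding, mask ratios, decoder architecture, and Local Enhancement Module, refer to the released code at the repository linked in the abstract.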

Published

2024-03-24

How to Cite

Zha, Y., Ji, H., Li, J., Li, R., Dai, T., Chen, B., Wang, Z., & Xia, S.-T. (2024). Towards Compact 3D Representations via Point Feature Enhancement Masked Autoencoders. Proceedings of the AAAI Conference on Artificial Intelligence, 38(7), 6962-6970. https://doi.org/10.1609/aaai.v38i7.28522

Issue

Vol. 38 No. 7 (2024)
Section

AAAI Technical Track on Computer Vision VI