Digging into Intrinsic Contextual Information for High-fidelity 3D Point Cloud Completion

Authors

  • Jisheng Chu — Harbin Institute of Technology
  • Wenrui Li — Harbin Institute of Technology
  • Xingtao Wang — Harbin Institute of Technology; Harbin Institute of Technology Suzhou Research Institute
  • Kanglin Ning — Harbin Institute of Technology; Harbin Institute of Technology Suzhou Research Institute
  • Yidan Lu — Harbin Institute of Technology
  • Xiaopeng Fan — Harbin Institute of Technology; Harbin Institute of Technology Suzhou Research Institute; Pengcheng Laboratory

DOI:

https://doi.org/10.1609/aaai.v39i3.32260

Abstract

The common occurrence of occlusion-induced incompleteness in point clouds has made point cloud completion (PCC) a task of considerable interest in the field of geometric processing. Existing PCC methods typically produce complete point clouds from partial point clouds in a coarse-to-fine paradigm, with the coarse stage generating the overall shape and the fine stage recovering texture details. Although diffusion models have demonstrated effectiveness in the coarse stage, the fine stage still struggles to produce high-fidelity results due to the ill-posed nature of PCC. The intrinsic contextual information for texture details in partial point clouds is the key to resolving this challenge. In this paper, we propose a high-fidelity PCC method that digs into both short- and long-range contextual information from the partial point cloud in the fine stage. Specifically, after generating the coarse point cloud via a diffusion-based coarse generator, a mixed sampling module introduces short-range contextual information from the partial point cloud into the fine stage, and a surface freezing module safeguards the noise-free points of the partial point cloud against disruption. To exploit long-range contextual information, we design a similarity modeling module that derives a similarity measure with rigid transformation invariance between points, enabling effective global matching of geometric manifold features. In this way, the high-quality components present in the partial point cloud serve as valuable references for refining the coarse point cloud with high fidelity. Extensive experiments demonstrate the superiority of the proposed method over state-of-the-art competitors.
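The rigid-transformation-invariant similarity mentioned in the abstract can be illustrated with a toy sketch (this is an illustrative assumption, not the paper's actual module): pairwise distances within a local patch of points are unchanged by rotation and translation, so a sorted distance signature can serve as a rigid-invariant descriptor for matching patches across a point cloud.

```python
import numpy as np

def pairwise_dists(points):
    # Sorted pairwise distances: invariant to rotation and translation.
    diff = points[:, None, :] - points[None, :, :]
    d = np.linalg.norm(diff, axis=-1)
    iu = np.triu_indices(len(points), k=1)
    return np.sort(d[iu])

def rigid_invariant_similarity(patch_a, patch_b):
    # Higher (closer to 0) means the two patches have more similar
    # internal geometry, regardless of their pose in space.
    fa, fb = pairwise_dists(patch_a), pairwise_dists(patch_b)
    return -np.linalg.norm(fa - fb)

rng = np.random.default_rng(0)
patch = rng.normal(size=(16, 3))

# Apply a random rigid transform (rotation + translation) to the patch.
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(q) < 0:
    q[:, 0] *= -1  # ensure a proper rotation
moved = patch @ q.T + np.array([1.0, -2.0, 0.5])

# The signature is unchanged under the rigid transform (up to float error).
print(np.allclose(pairwise_dists(patch), pairwise_dists(moved)))  # True
```

A signature of this kind lets geometrically similar regions be matched globally without first aligning them, which is the intuition behind using rigid-invariant similarity for long-range context.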

Published

2025-04-11

How to Cite

Chu, J., Li, W., Wang, X., Ning, K., Lu, Y., & Fan, X. (2025). Digging into Intrinsic Contextual Information for High-fidelity 3D Point Cloud Completion. Proceedings of the AAAI Conference on Artificial Intelligence, 39(3), 2573–2581. https://doi.org/10.1609/aaai.v39i3.32260

Section

AAAI Technical Track on Computer Vision II