LPCG: A Self-conditional Architecture for Labeled Point Cloud Generation
DOI:
https://doi.org/10.1609/aaai.v39i4.32378
Abstract
Recently, there has been considerable exploration of methods for generating 3D point clouds, which is crucial for numerous 3D vision applications. Though conditional generation methods show promising performance, they depend on additional paired labels. Unconditional generation methods, on the other hand, usually fail to annotate the generated 3D point clouds. In this paper, we introduce a novel self-conditional architecture that trains on unlabeled data and then generates high-quality labeled 3D point clouds. Specifically, we design a module to extract geometry and view features, and a feature fusion module to integrate them as a substitute for the label embedding in conditional point cloud generation. The point cloud generator is then trained on the fused features. LPCG also harnesses CLIP to process the view features of point clouds and generate label information. In addition, we train two feature diffusion modules to capture the essence of the multimodal features and obtain diverse fused features for use as conditions when generating point clouds. Experiments on the ShapeNet dataset demonstrate that LPCG achieves state-of-the-art performance for single-class generation. Our experimental results show that the accuracy of our generated label annotations reaches around 97.44% on a two-class generation task.
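The core idea above — replacing a label embedding with a condition vector fused from a geometry feature and a view feature — can be illustrated with a minimal sketch. This is a toy stand-in, not the paper's implementation: the function names, feature dimensions, and the simple statistics/concatenation used here are assumptions; the paper uses learned encoders, CLIP view embeddings, and a learned fusion module.

```python
import random

def geometry_feature(points, dim=8):
    # Toy "geometry encoder": per-axis mean and spread of the point cloud,
    # padded/truncated to a fixed width (hypothetical stand-in for the
    # paper's learned geometry feature extractor).
    n = len(points)
    means = [sum(p[i] for p in points) / n for i in range(3)]
    spreads = [max(p[i] for p in points) - min(p[i] for p in points) for i in range(3)]
    return (means + spreads + [0.0] * dim)[:dim]

def view_feature(view_id, dim=8):
    # Toy stand-in for a CLIP-style view embedding: a deterministic
    # pseudo-random vector per view identifier.
    rng = random.Random(view_id)
    return [rng.uniform(-1.0, 1.0) for _ in range(dim)]

def fuse(geo, view):
    # Simplest possible fusion: concatenation. The paper instead trains a
    # feature fusion module (and feature diffusion modules for diversity).
    return geo + view

# The fused vector plays the role the label embedding would play in a
# conditional generator: it conditions generation without needing labels.
points = [(random.random(), random.random(), random.random()) for _ in range(128)]
cond = fuse(geometry_feature(points), view_feature("view_0"))
print(len(cond))
```

In the actual architecture, this condition vector would be fed to the point cloud generator in place of a class-label embedding, which is what lets the model train on unlabeled data yet still produce labeled outputs.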
Published
2025-04-11
How to Cite
Huang, D., Huang, X., Zhang, C., & Shi, Y. (2025). LPCG: A Self-conditional Architecture for Labeled Point Cloud Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 39(4), 3635-3643. https://doi.org/10.1609/aaai.v39i4.32378
Section
AAAI Technical Track on Computer Vision III