Point Cloud Semantic Scene Completion from RGB-D Images


  • Shoulong Zhang, Beihang University, Beijing, China
  • Shuai Li, Beihang University, Beijing, China; Peng Cheng Laboratory, Shenzhen, China
  • Aimin Hao, Beihang University, Beijing, China; Peng Cheng Laboratory, Shenzhen, China
  • Hong Qin, Stony Brook University (SUNY), Stony Brook, USA


3D Computer Vision, Scene Analysis & Understanding, General, Applications


In this paper, we devise a novel semantic completion network, called the point cloud semantic scene completion network (PCSSC-Net), for indoor scenes based solely on point clouds. Existing point cloud completion networks still suffer from an inability to fully recover complex structures and contents, as they rely on global geometric descriptions while neglecting semantic hints. To extract and infer comprehensive information from partial input, we design a patch-based contextual encoder that hierarchically learns point-level, patch-level, and scene-level geometric and contextual semantic information with a divide-and-conquer strategy. Considering that scene semantics afford a high-level clue to the constituent geometry of an indoor environment, we articulate a semantics-guided completion decoder in which semantics help cluster isolated points in the latent space and infer complicated scene geometry. Given that real-world scans tend to be incomplete and thus unsuitable as ground truth, we choose to synthesize a scene dataset from RGB-D images and annotate complete point clouds as ground truth for supervised training. Extensive experiments validate that our new method achieves state-of-the-art performance in comparison with current methods applied to our dataset.
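To make the divide-and-conquer hierarchy in the abstract concrete, the sketch below shows one common way to realize point-level, patch-level, and scene-level feature extraction for a point cloud: farthest-point sampling picks patch centers, k-nearest-neighbor grouping forms patches, and max pooling aggregates features upward. This is a generic PointNet++-style illustration, not the authors' actual PCSSC-Net architecture; the function names are hypothetical, and a fixed random linear map stands in for a learned per-point MLP.

```python
import numpy as np

def farthest_point_sample(points, n_samples, seed=0):
    """Greedy farthest-point sampling: pick patch centers that cover the cloud."""
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    centers = [int(rng.integers(n))]
    dists = np.full(n, np.inf)
    for _ in range(n_samples - 1):
        d = np.linalg.norm(points - points[centers[-1]], axis=1)
        dists = np.minimum(dists, d)
        centers.append(int(np.argmax(dists)))
    return np.array(centers)

def group_knn(points, center_idx, k):
    """Gather the k nearest neighbors of each patch center to form local patches."""
    centers = points[center_idx]                                          # (m, 3)
    d = np.linalg.norm(points[None, :, :] - centers[:, None, :], axis=2)  # (m, n)
    nn = np.argsort(d, axis=1)[:, :k]                                     # (m, k)
    return points[nn]                                                     # (m, k, 3)

def hierarchical_encode(points, n_patches=16, k=32):
    """Point -> patch -> scene features via a shared point map and max pooling."""
    centers = farthest_point_sample(points, n_patches)
    patches = group_knn(points, centers, k)            # (m, k, 3)
    local = patches - patches[:, :1, :]                # center each patch locally
    # Stand-in for a learned shared MLP: fixed random linear map + ReLU.
    rng = np.random.default_rng(1)
    W = rng.standard_normal((3, 64))
    point_feat = np.maximum(local @ W, 0.0)            # point-level:  (m, k, 64)
    patch_feat = point_feat.max(axis=1)                # patch-level:  (m, 64)
    scene_feat = patch_feat.max(axis=0)                # scene-level:  (64,)
    return point_feat, patch_feat, scene_feat

if __name__ == "__main__":
    cloud = np.random.default_rng(0).standard_normal((256, 3))
    pf, paf, sf = hierarchical_encode(cloud)
    print(pf.shape, paf.shape, sf.shape)
```

In a trained network the random map would be replaced by learned weights, and the decoder would condition on the scene- and patch-level codes (plus predicted semantics) to regress the completed point set.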




How to Cite

Zhang, S., Li, S., Hao, A., & Qin, H. (2021). Point Cloud Semantic Scene Completion from RGB-D Images. Proceedings of the AAAI Conference on Artificial Intelligence, 35(4), 3385-3393. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16451



AAAI Technical Track on Computer Vision III