Single View Point Cloud Generation via Unified 3D Prototype

Authors

  • Yu Lin, University of Texas at Dallas
  • Yigong Wang, University of Texas at Dallas
  • Yi-Fan Li, University of Texas at Dallas
  • Zhuoyi Wang, University of Texas at Dallas
  • Yang Gao, University of Texas at Dallas
  • Latifur Khan, University of Texas at Dallas

Keywords:

3D Computer Vision, (Deep) Neural Network Algorithms

Abstract

As 3D point clouds become the representation of choice for many vision and graphics applications, such as autonomous driving and robotics, their generation by deep neural networks has attracted increasing attention in the research community. Despite the recent success of deep learning models in classification and segmentation, synthesizing point clouds remains challenging, especially from a single image. State-of-the-art (SOTA) approaches can generate a point cloud from a latent vector; however, they treat 2D and 3D features equally and disregard the rich shape information within the 3D data. In this paper, we address this problem by integrating image features with 3D prototype features. Specifically, we propose to learn a set of 3D prototype features from a real point cloud dataset and dynamically adjust them throughout training. These prototypes are then integrated with incoming image features to guide the point cloud generation process. Experimental results show that our proposed method outperforms SOTA methods on single-image 3D reconstruction tasks.
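The core idea in the abstract — attending over a bank of learned 3D prototypes with an incoming image feature and fusing the result before decoding — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation; the function name `fuse_with_prototypes`, the dot-product attention, and the concatenation fusion are all assumptions for exposition (in training, `prototypes` would be a learnable parameter updated by backpropagation).

```python
import numpy as np

def fuse_with_prototypes(img_feat, prototypes):
    """Hypothetical sketch: attend over learned 3D prototypes with an image feature.

    img_feat:   (D,)   feature extracted from the input image
    prototypes: (K, D) bank of learned 3D shape prototypes
    returns:    (2D,)  fused feature to condition the point cloud decoder
    """
    scores = prototypes @ img_feat                    # (K,) similarity to each prototype
    scores = scores - scores.max()                    # shift for numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()   # softmax attention weights
    proto_summary = weights @ prototypes              # (D,) attention-weighted prototype
    return np.concatenate([img_feat, proto_summary])  # fuse 2D and 3D information

rng = np.random.default_rng(0)
K, D = 8, 16
prototypes = rng.standard_normal((K, D))  # learnable in a real model
img_feat = rng.standard_normal(D)
fused = fuse_with_prototypes(img_feat, prototypes)
```

The fused vector would then be fed to a generator network that outputs point coordinates; the concatenation here stands in for whatever fusion operator the paper actually uses.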

Published

2021-05-18

How to Cite

Lin, Y., Wang, Y., Li, Y.-F., Wang, Z., Gao, Y., & Khan, L. (2021). Single View Point Cloud Generation via Unified 3D Prototype. Proceedings of the AAAI Conference on Artificial Intelligence, 35(3), 2064-2072. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16303

Section

AAAI Technical Track on Computer Vision II