Learning Generative Neural Networks for 3D Colorization

Authors

  • Zhenpei Yang University of Texas at Austin
  • Lihang Liu University of Texas at Austin
  • Qixing Huang University of Texas at Austin

Keywords

Generative Modeling, Neural Networks, 3D Convolution, Alternating Minimization

Abstract

Automatic generation of 3D visual content is a fundamental problem that sits at the intersection of visual computing and artificial intelligence. So far, most existing works have focused on geometry synthesis. In contrast, advances in the automatic synthesis of color information, which conveys rich semantic information about 3D geometry, remain rather limited. In this paper, we propose to learn a generative model that maps a latent color parameter space to a space of colorizations across a shape collection. The colorizations are diverse on each shape and consistent across the shape collection. We introduce an unsupervised approach for training this generative model and demonstrate its effectiveness across a wide range of categories. The key feature of our approach is that it requires only one colorization per shape in the training data, and it utilizes a neural network to propagate the color information of other shapes to train the generative model for each particular shape. This characteristic makes our approach applicable to standard internet shape repositories.
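To make the core idea concrete, the sketch below shows what "a generative model that maps a latent color parameter space to colorizations" could look like in the simplest possible form. This is an illustrative toy, not the paper's architecture: the grid size, latent dimension, random weights, and the tiny per-voxel MLP (standing in for the paper's 3D-convolutional generator) are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): a 16^3 occupancy grid
# representing the shape, and an 8-D latent color code z.
GRID, Z_DIM, HIDDEN = 16, 8, 32

# Random weights stand in for a trained generator.
W1 = rng.standard_normal((Z_DIM + 1, HIDDEN)) * 0.1
W2 = rng.standard_normal((HIDDEN, 3)) * 0.1

def colorize(occupancy, z):
    """Map (occupancy grid, latent color code z) -> per-voxel RGB in [0, 1].

    Each voxel's feature is its occupancy value concatenated with the
    shared latent code; a small two-layer MLP predicts its color.
    Varying z while holding the shape fixed yields diverse colorizations
    of the same geometry, which is the behavior the paper's generator
    is trained to produce.
    """
    occ = occupancy.reshape(-1, 1)                               # (V, 1)
    zs = np.broadcast_to(z, (occ.shape[0], Z_DIM))               # (V, Z_DIM)
    feat = np.concatenate([occ, zs], axis=1)                     # (V, Z_DIM + 1)
    h = np.maximum(feat @ W1, 0.0)                               # ReLU
    rgb = 1.0 / (1.0 + np.exp(-(h @ W2)))                        # sigmoid -> [0, 1]
    return rgb.reshape(GRID, GRID, GRID, 3)

occupancy = (rng.random((GRID, GRID, GRID)) > 0.5).astype(np.float32)
colors_a = colorize(occupancy, rng.standard_normal(Z_DIM))
colors_b = colorize(occupancy, rng.standard_normal(Z_DIM))
print(colors_a.shape)  # (16, 16, 16, 3): one RGB triple per voxel
```

Sampling two different latent codes for the same occupancy grid produces two distinct colorizations, mirroring the "diverse on each shape" property described in the abstract.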

Published

2018-04-26

How to Cite

Yang, Z., Liu, L., & Huang, Q. (2018). Learning Generative Neural Networks for 3D Colorization. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/11909

Section

Main Track: Machine Learning Applications