Semantic 3D-Aware Portrait Synthesis and Manipulation Based on Compositional Neural Radiance Field
DOI: https://doi.org/10.1609/aaai.v37i2.25278
Keywords: CV: Computational Photography, Image & Video Synthesis
Abstract
Recently, 3D-aware GAN methods based on neural radiance fields have developed rapidly. However, current methods model the whole image as a single overall neural radiance field, which limits the partial semantic editability of the synthesized results. Since NeRF renders an image pixel by pixel, the radiance field can be split along the spatial dimension. We propose a Compositional Neural Radiance Field (CNeRF) for semantic 3D-aware portrait synthesis and manipulation. CNeRF divides the image into semantic regions, learns an independent neural radiance field for each region, and finally fuses them to render the complete image. Thus we can manipulate the synthesized semantic regions independently while keeping the other parts unchanged. Furthermore, CNeRF is also designed to decouple shape and texture within each semantic region. Compared to state-of-the-art 3D-aware GAN methods, our approach enables fine-grained semantic region manipulation while maintaining high-quality, 3D-consistent synthesis. Ablation studies show the effectiveness of the structure and loss functions used by our method. In addition, real-image inversion and cartoon-portrait 3D editing experiments demonstrate the application potential of our method.
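The abstract's core idea — per-region radiance fields that are fused before volume rendering — can be illustrated with a toy sketch. This is a hypothetical density-weighted fusion of K per-region outputs at N sample points, written for illustration only; the function name `fuse_regions` and the specific weighting scheme are assumptions, not the paper's actual implementation.

```python
import numpy as np

def fuse_regions(sigmas, colors):
    """Fuse per-region radiance-field outputs at shared 3D sample points.

    sigmas: (K, N) array of densities from K semantic-region generators.
    colors: (K, N, 3) array of RGB values from the same generators.
    Returns a single fused density (N,) and color (N, 3) per point.

    Hypothetical scheme: total density is the sum over regions, and the
    fused color is a density-weighted blend, so the densest region at a
    point dominates its appearance. Editing one region's generator then
    leaves points dominated by other regions essentially unchanged.
    """
    total = np.clip(sigmas.sum(axis=0, keepdims=True), 1e-8, None)
    w = sigmas / total                               # (K, N) blend weights
    sigma = sigmas.sum(axis=0)                       # (N,) fused density
    color = (w[..., None] * colors).sum(axis=0)      # (N, 3) fused color
    return sigma, color
```

Under this scheme, a point occupied only by the "hair" region's field keeps the hair region's color exactly, which is the intuition behind manipulating one semantic region while the others stay fixed.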
Published
2023-06-26
How to Cite
Ma, T., Li, B., He, Q., Dong, J., & Tan, T. (2023). Semantic 3D-Aware Portrait Synthesis and Manipulation Based on Compositional Neural Radiance Field. Proceedings of the AAAI Conference on Artificial Intelligence, 37(2), 1878-1886. https://doi.org/10.1609/aaai.v37i2.25278
Section: AAAI Technical Track on Computer Vision II