Dual Mapping of 2D StyleGAN for 3D-Aware Image Generation and Manipulation (Student Abstract)
DOI:
https://doi.org/10.1609/aaai.v38i21.30428
Keywords:
3D-aware GAN, Pretrained GAN, Image Manipulation
Abstract
3D-aware GANs successfully solve the problem of 3D-consistent image generation and additionally provide a 3D shape of the generated object. However, the use of a volume renderer disturbs the disentanglement of the latent space, which makes 3D-aware GANs difficult to manipulate and lowers the image quality of style-based generators. In this work, we devise a dual-mapping framework that makes the images generated by a pretrained 2D StyleGAN consistent in 3D space. We utilize a tri-plane representation to estimate the 3D shape of the generated object, and two mapping networks to bridge the latent space of StyleGAN and the 3D tri-plane space. Our method does not alter the parameters of the pretrained generator, so the interpretability of the latent space is preserved for various image manipulations. Experiments show that our method lifts pretrained 2D StyleGAN to 3D awareness and outperforms existing 3D-aware GANs in controllability and image quality.
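The abstract describes two mapping networks that bridge StyleGAN's latent space and a tri-plane feature space while the pretrained generator stays frozen. Below is a minimal, hypothetical numpy sketch of that idea; the latent dimensionality (512), plane resolution, channel count, and the nearest-neighbor plane lookup are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Assumed dimensions (illustrative, not from the paper).
W_DIM = 512                    # StyleGAN w-space dimensionality
PLANE_C, PLANE_RES = 32, 64    # channels / resolution per tri-plane

rng = np.random.default_rng(0)

def make_mlp(in_dim, hidden, out_dim):
    """Weights of a tiny 2-layer MLP with ReLU (stand-in for a mapping network)."""
    return (rng.standard_normal((in_dim, hidden)) * 0.02,
            rng.standard_normal((hidden, out_dim)) * 0.02)

def mlp_forward(params, x):
    w1, w2 = params
    return np.maximum(x @ w1, 0.0) @ w2

# Dual mapping: one network lifts w into tri-plane space; the other maps a
# tri-plane feature back toward w-space, so the frozen 2D StyleGAN generator
# is never modified and its latent interpretability is preserved.
lift = make_mlp(W_DIM, 256, 3 * PLANE_C * PLANE_RES * PLANE_RES)
project = make_mlp(3 * PLANE_C, 256, W_DIM)   # per-point feature -> w offset

w = rng.standard_normal(W_DIM)
tri_planes = mlp_forward(lift, w).reshape(3, PLANE_C, PLANE_RES, PLANE_RES)

def sample_triplane(planes, xyz):
    """Nearest-neighbor lookup of a 3D point on the three axis-aligned planes
    (bilinear interpolation in practice)."""
    x, y, z = ((xyz * 0.5 + 0.5) * (PLANE_RES - 1)).astype(int)
    feats = [planes[0, :, y, x], planes[1, :, z, x], planes[2, :, z, y]]
    return np.concatenate(feats)              # aggregated (3*PLANE_C,) feature

feat = sample_triplane(tri_planes, np.array([0.1, -0.2, 0.3]))
w_edit = w + mlp_forward(project, feat)       # edited latent stays in w-space
print(tri_planes.shape, feat.shape, w_edit.shape)
```

In a real system, `tri_planes` would condition a volume renderer for 3D-consistent views, while `w_edit` would be fed to the untouched StyleGAN generator, which is what keeps latent-space editing directions usable.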
Published
2024-03-24
How to Cite
Chen, Z., Zhao, H., Wang, C., Yuan, B., & Li, X. (2024). Dual Mapping of 2D StyleGAN for 3D-Aware Image Generation and Manipulation (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23458-23459. https://doi.org/10.1609/aaai.v38i21.30428
Issue
Section
AAAI Student Abstract and Poster Program