Cycle3D: High-quality and Consistent Image-to-3D Generation via Generation-Reconstruction Cycle

Authors

  • Zhenyu Tang, Peking University
  • Junwu Zhang, Peking University
  • Xinhua Cheng, Peking University
  • Wangbo Yu, Peking University
  • Chaoran Feng, Peking University
  • Yatian Pang, Peking University; National University of Singapore
  • Bin Lin, Peking University
  • Li Yuan, Peking University

DOI:

https://doi.org/10.1609/aaai.v39i7.32787

Abstract

Recent 3D large reconstruction models typically employ a two-stage process: first, a multi-view diffusion model generates multi-view images; then, a feed-forward model reconstructs 3D content from these images. However, multi-view diffusion models often produce low-quality and inconsistent images, adversely affecting the quality of the final 3D reconstruction. To address this issue, we propose a unified 3D generation framework called Cycle3D, which cyclically applies a 2D diffusion-based generation module and a feed-forward 3D reconstruction module during the multi-step diffusion process. Concretely, the 2D diffusion model generates high-quality textures, while the reconstruction model guarantees multi-view consistency. Moreover, the 2D diffusion model can further control the generated content and inject reference-view information into unseen views, thereby enhancing the diversity and texture consistency of 3D generation during the denoising process. Extensive experiments demonstrate the superior ability of our method to create 3D content with high quality and consistency compared with state-of-the-art baselines.
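The generation-reconstruction cycle described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: `diffusion_denoise_step`, `reconstruct_3d`, and `render_views` are hypothetical stubs standing in for the real 2D diffusion model, feed-forward reconstructor, and renderer; the key point is only the control flow, where re-rendered (consistent) views feed back into the next denoising step.

```python
import numpy as np

def diffusion_denoise_step(views, t):
    # Stand-in for one step of a 2D diffusion model refining multi-view
    # images at timestep t (here: a trivial numeric update).
    return views * 0.9

def reconstruct_3d(views):
    # Stand-in for a feed-forward reconstructor fusing views into a single
    # 3D representation; averaging crudely mimics enforcing consistency.
    return views.mean(axis=0)

def render_views(model3d, n_views):
    # Stand-in for re-rendering the 3D model into n consistent views.
    return np.stack([model3d] * n_views)

def cycle3d_sketch(init_views, n_steps):
    """Generation-reconstruction cycle: each denoising step alternates a
    2D diffusion refinement (texture quality) with a 3D reconstruction
    plus re-render (multi-view consistency), and the consistent rendered
    views re-enter the next diffusion step."""
    views = init_views
    model3d = None
    for t in reversed(range(n_steps)):
        views = diffusion_denoise_step(views, t)    # 2D generation module
        model3d = reconstruct_3d(views)             # 3D reconstruction module
        views = render_views(model3d, len(views))   # consistent views fed back
    return model3d
```

The design point illustrated is that reconstruction happens inside the denoising loop rather than once at the end, so inconsistencies introduced by the 2D model are corrected at every step instead of accumulating.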

Published

2025-04-11

How to Cite

Tang, Z., Zhang, J., Cheng, X., Yu, W., Feng, C., Pang, Y., … Yuan, L. (2025). Cycle3D: High-quality and Consistent Image-to-3D Generation via Generation-Reconstruction Cycle. Proceedings of the AAAI Conference on Artificial Intelligence, 39(7), 7320–7328. https://doi.org/10.1609/aaai.v39i7.32787

Section

AAAI Technical Track on Computer Vision VI