3D-TOGO: Towards Text-Guided Cross-Category 3D Object Generation

Authors

  • Zutao Jiang, School of Software Engineering, Xi’an Jiaotong University; PengCheng Laboratory
  • Guansong Lu, Huawei Noah’s Ark Lab
  • Xiaodan Liang, Sun Yat-sen University; MBZUAI
  • Jihua Zhu, Xi’an Jiaotong University
  • Wei Zhang, Huawei Noah’s Ark Lab
  • Xiaojun Chang, ReLER, AAII, University of Technology Sydney
  • Hang Xu, Huawei Noah’s Ark Lab

DOI:

https://doi.org/10.1609/aaai.v37i1.25186

Keywords:

CV: Multi-modal Vision

Abstract

This article has been updated to fix an error in the published paper. An Erratum to this article was published on 6 September 2023.

Text-guided 3D object generation aims to generate 3D objects described by user-defined captions, which offers a flexible way to visualize what we imagine. Although some works have been devoted to this challenging task, they either utilize explicit 3D representations (e.g., meshes), which lack texture and require post-processing to render photo-realistic views, or require individual time-consuming optimization for every single case. Here, we make the first attempt to achieve generic text-guided cross-category 3D object generation via a new 3D-TOGO model, which integrates a text-to-views generation module and a views-to-3D generation module. The text-to-views generation module is designed to generate different views of the target 3D object given an input caption. Prior-guidance, caption-guidance and view contrastive learning are proposed to achieve better view consistency and caption similarity. Meanwhile, a pixelNeRF model is adopted in the views-to-3D generation module to obtain an implicit 3D neural representation from the previously generated views. Our 3D-TOGO model generates 3D objects in the form of neural radiance fields with good texture and requires no time-consuming optimization for each caption. Besides, 3D-TOGO can control the category, color and shape of the generated 3D objects via the input caption. Extensive experiments on the largest 3D object dataset (i.e., ABO) verify that 3D-TOGO generates higher-quality 3D objects from input captions across 98 different categories than text-NeRF and Dreamfields, in terms of PSNR, SSIM, LPIPS and CLIP-score.
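To make the two-stage pipeline concrete, the sketch below traces the data flow the abstract describes: a caption embedding is decoded into several object views, which then condition a radiance field that outputs color and density at 3D query points. This is a minimal illustration, not the authors' released code; every module internal, dimension, and class name here is a hypothetical placeholder, and the real system adds prior-guidance, caption-guidance and view contrastive learning in stage 1 and uses pixelNeRF's per-pixel ray-aligned features rather than a single pooled feature in stage 2.

```python
# Hypothetical sketch of a text -> views -> 3D pipeline in the spirit of the
# abstract. Not the 3D-TOGO implementation; all internals are placeholders.
import torch
import torch.nn as nn

class TextToViews(nn.Module):
    """Stage 1 (placeholder): decode a caption embedding into N object views."""
    def __init__(self, text_dim=512, n_views=8, img_size=64):
        super().__init__()
        self.n_views, self.img_size = n_views, img_size
        self.decoder = nn.Sequential(
            nn.Linear(text_dim, 1024), nn.ReLU(),
            nn.Linear(1024, n_views * 3 * img_size * img_size),
        )

    def forward(self, text_emb):  # text_emb: (B, text_dim)
        x = self.decoder(text_emb)
        return x.view(-1, self.n_views, 3, self.img_size, self.img_size)

class ViewsTo3D(nn.Module):
    """Stage 2 (placeholder stand-in for pixelNeRF): condition a radiance
    field on features extracted from the generated views."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.encoder = nn.Conv2d(3, feat_dim, kernel_size=3, padding=1)
        # Maps (xyz coordinate + pooled image feature) -> (rgb, sigma).
        self.field = nn.Sequential(
            nn.Linear(3 + feat_dim, 256), nn.ReLU(), nn.Linear(256, 4),
        )

    def forward(self, views, points):  # views: (B,N,3,H,W), points: (B,P,3)
        b, n, _, _, _ = views.shape
        feats = self.encoder(views.flatten(0, 1))               # (B*N, F, H, W)
        pooled = feats.mean(dim=(2, 3)).view(b, n, -1).mean(1)  # (B, F)
        pooled = pooled.unsqueeze(1).expand(-1, points.shape[1], -1)
        return self.field(torch.cat([points, pooled], dim=-1))  # (B, P, 4)

if __name__ == "__main__":
    caption_emb = torch.randn(2, 512)                  # e.g., a CLIP text embedding
    views = TextToViews()(caption_emb)                 # (2, 8, 3, 64, 64)
    rgb_sigma = ViewsTo3D()(views, torch.randn(2, 1024, 3))
    print(views.shape, rgb_sigma.shape)                # (2,8,3,64,64), (2,1024,4)
```

The key design point the sketch preserves is that, unlike per-caption optimization methods such as Dreamfields, both stages are feed-forward: once trained, a new caption requires only a forward pass rather than a fresh optimization loop.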

Published

2023-06-26 — Updated on 2023-09-06

How to Cite

Jiang, Z., Lu, G., Liang, X., Zhu, J., Zhang, W., Chang, X., & Xu, H. (2023). 3D-TOGO: Towards Text-Guided Cross-Category 3D Object Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 37(1), 1051-1059. https://doi.org/10.1609/aaai.v37i1.25186 (Original work published June 26, 2023)

Issue

Vol. 37 No. 1 (2023)

Section

AAAI Technical Track on Computer Vision I