Effective Diffusion Transformer Architecture for Image Super-Resolution

Authors

  • Kun Cheng State Key Laboratory of Integrated Services Networks, Xidian University
  • Lei Yu Huawei Noah's Ark Lab
  • Zhijun Tu Huawei Noah's Ark Lab
  • Xiao He State Key Laboratory of Integrated Services Networks, Xidian University
  • Liyu Chen Huawei Noah's Ark Lab
  • Yong Guo Consumer Business Group, Huawei
  • Mingrui Zhu State Key Laboratory of Integrated Services Networks, Xidian University
  • Nannan Wang State Key Laboratory of Integrated Services Networks, Xidian University
  • Xinbo Gao Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications
  • Jie Hu Huawei Noah's Ark Lab

DOI:

https://doi.org/10.1609/aaai.v39i3.32247

Abstract

Recent advances indicate that diffusion models hold great promise for image super-resolution. While the latest methods are primarily built on latent diffusion models with convolutional neural networks, few attempts have explored transformers, which have demonstrated remarkable performance in image generation. In this work, we design an effective diffusion transformer for image super-resolution (DiT-SR) that matches the visual quality of prior-based methods while being trained from scratch. In practice, DiT-SR leverages an overall U-shaped architecture and adopts a uniform isotropic design for the transformer blocks across all stages. The former facilitates multi-scale hierarchical feature extraction, while the latter reallocates computational resources to critical layers to further enhance performance. Moreover, we thoroughly analyze the limitations of the widely used AdaLN and present a frequency-adaptive time-step conditioning module, enhancing the model's capacity to process distinct frequency information at different time steps. Extensive experiments demonstrate that DiT-SR significantly outperforms existing training-from-scratch diffusion-based SR methods, and even beats some prior-based methods built on pretrained Stable Diffusion, proving the superiority of the diffusion transformer for image super-resolution.
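To illustrate the distinction the abstract draws, the sketch below contrasts AdaLN-style conditioning, which applies one global scale/shift regardless of frequency content, with a frequency-adaptive alternative that predicts a separate gain per Fourier bin from the time-step embedding. This is a minimal illustration of the general idea, not the paper's implementation; all function and weight names (`adaln`, `freq_adaptive`, `w_gain`) are hypothetical.

```python
import numpy as np

def timestep_embedding(t, dim=16):
    """Standard sinusoidal time-step embedding used in diffusion models."""
    half = dim // 2
    freqs = np.exp(-np.log(10000.0) * np.arange(half) / half)
    return np.concatenate([np.cos(t * freqs), np.sin(t * freqs)])

def adaln(x, t_emb, w_scale, w_shift):
    """AdaLN baseline: a single scale and shift for the whole feature,
    so every frequency component is modulated identically."""
    xn = (x - x.mean()) / (x.std() + 1e-6)
    return xn * (1 + w_scale @ t_emb) + (w_shift @ t_emb)

def freq_adaptive(x, t_emb, w_gain):
    """Frequency-adaptive conditioning (sketch): predict one gain per
    frequency bin from the time-step embedding and apply it in the
    Fourier domain, so low and high frequencies can be emphasised
    differently at different diffusion steps."""
    spec = np.fft.rfft(x)
    gains = 1 + w_gain @ t_emb      # one gain per rfft bin (hypothetical weights)
    return np.fft.irfft(spec * gains, n=x.size)

# Toy usage on a 1-D feature vector.
rng = np.random.default_rng(0)
n, dim = 64, 16
x = rng.standard_normal(n)
t_emb = timestep_embedding(t=500, dim=dim)
w_gain = 0.1 * rng.standard_normal((n // 2 + 1, dim))
y = freq_adaptive(x, t_emb, w_gain)
print(y.shape)  # (64,)
```

With zero conditioning weights the frequency-adaptive path reduces to the identity, whereas nonzero weights reshape the spectrum as a function of the time step.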

Published

2025-04-11

How to Cite

Cheng, K., Yu, L., Tu, Z., He, X., Chen, L., Guo, Y., Zhu, M., Wang, N., Gao, X., & Hu, J. (2025). Effective Diffusion Transformer Architecture for Image Super-Resolution. Proceedings of the AAAI Conference on Artificial Intelligence, 39(3), 2455-2463. https://doi.org/10.1609/aaai.v39i3.32247

Section

AAAI Technical Track on Computer Vision II