ResDiff: Combining CNN and Diffusion Model for Image Super-resolution
DOI:
https://doi.org/10.1609/aaai.v38i8.28746
Keywords:
DMKM: Mining of Visual, Multimedia & Multimodal Data; DMKM: Applications
Abstract
Adapting the Diffusion Probabilistic Model (DPM) for direct image super-resolution is wasteful, given that a simple Convolutional Neural Network (CNN) can recover the main low-frequency content. Therefore, we present ResDiff, a novel Diffusion Probabilistic Model based on a residual structure for Single Image Super-Resolution (SISR). ResDiff combines a CNN, which restores the primary low-frequency components, with a DPM, which predicts the residual between the ground-truth image and the CNN-predicted image. In contrast to common diffusion-based methods that use the LR space directly to guide the noise towards the HR space, ResDiff uses the CNN's initial prediction to direct the noise towards the residual space between the HR space and the CNN-predicted space, which not only accelerates the generation process but also yields superior sample quality. Additionally, a frequency-domain-based loss function for the CNN is introduced to facilitate its restoration, and a frequency-domain guided diffusion is designed for the DPM to help it predict high-frequency details. Extensive experiments on multiple benchmark datasets demonstrate that ResDiff outperforms previous diffusion-based methods in terms of shorter model convergence time, superior generation quality, and more diverse samples.
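The residual decomposition the abstract describes can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: a low-pass frequency filter stands in for the CNN's low-frequency prediction, and the frequency-domain loss shown here (L1 distance between FFT magnitudes) is a hedged guess at the general idea, not the paper's exact formulation.

```python
import numpy as np

def lowpass_predict(hr, keep=8):
    """Stand-in for the CNN's low-frequency prediction: keep only the
    central `keep`-band of frequency coefficients. Illustrative only,
    not the paper's actual network."""
    f = np.fft.fftshift(np.fft.fft2(hr))
    mask = np.zeros_like(f)
    c = hr.shape[0] // 2
    mask[c - keep:c + keep, c - keep:c + keep] = 1
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

def frequency_domain_loss(pred, target):
    """Sketch of a frequency-domain loss: mean L1 distance between
    FFT magnitudes (assumed form, may differ from the paper)."""
    return np.mean(np.abs(np.abs(np.fft.fft2(pred)) - np.abs(np.fft.fft2(target))))

rng = np.random.default_rng(0)
hr = rng.random((32, 32))        # toy ground-truth HR image
cnn_pred = lowpass_predict(hr)   # CNN recovers the low-frequency content
residual = hr - cnn_pred         # the DPM's training target: the HF residual
sr = cnn_pred + residual         # at inference: CNN output + sampled residual
assert np.allclose(sr, hr)       # the decomposition is exact by construction
```

The point of the decomposition is that the diffusion model only has to generate the high-frequency residual, a much easier target than the full HR image, which is what the abstract credits for the faster convergence and better sample quality.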
Published
2024-03-24
How to Cite
Shang, S., Shan, Z., Liu, G., Wang, L., Wang, X., Zhang, Z., & Zhang, J. (2024). ResDiff: Combining CNN and Diffusion Model for Image Super-resolution. Proceedings of the AAAI Conference on Artificial Intelligence, 38(8), 8975-8983. https://doi.org/10.1609/aaai.v38i8.28746
Issue
Section
AAAI Technical Track on Data Mining & Knowledge Management