Improving Diffusion-Based Image Restoration with Error Contraction and Error Correction

Authors

  • Qiqi Bao Tsinghua University
  • Zheng Hui Institute for Intelligent Computing, Alibaba Group
  • Rui Zhu City, University of London
  • Peiran Ren Institute for Intelligent Computing, Alibaba Group
  • Xuansong Xie Institute for Intelligent Computing, Alibaba Group
  • Wenming Yang Tsinghua University

DOI:

https://doi.org/10.1609/aaai.v38i2.27833

Keywords:

CV: Low Level & Physics-based Vision, CV: Computational Photography, Image & Video Synthesis

Abstract

The generative diffusion prior captured by off-the-shelf denoising diffusion models has recently attracted significant interest. However, existing attempts to adapt diffusion models to noisy inverse problems either fail to achieve satisfactory results or require a few thousand iterations to reach high-quality reconstructions. In this work, we propose a diffusion-based image restoration method with error contraction and error correction (DiffECC). Two strategies are introduced to contract the restoration error in the posterior sampling process. First, we combine existing CNN-based approaches with diffusion models to ensure data consistency from the beginning. Second, to amplify the error-contraction effect of the noise, a restart sampling algorithm is designed. In the error correction strategy, an estimation-correction scheme is applied to both the data term and the prior term. Solving them iteratively within the diffusion sampling framework leads to superior image generation results. Experimental results on image restoration tasks such as super-resolution (SR), Gaussian deblurring, and motion deblurring demonstrate that our approach reconstructs higher-quality images than state-of-the-art sampling-based diffusion models.
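The restart idea mentioned in the abstract (re-injecting forward noise at an intermediate level and re-running part of the reverse process, so that the noising step contracts accumulated sampling error) can be illustrated on a toy problem. The sketch below is not the authors' DiffECC algorithm: it uses a 1-D Gaussian target, for which the ideal denoiser has a closed form, and a plain Euler probability-flow sampler; the schedule, restart level `sigma_r`, and number of restart iterations are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: x0 ~ N(MU, S**2). Under x_t = x0 + sigma*eps, the ideal
# denoiser E[x0 | x_t] has a closed form for this Gaussian model, and
# it stands in for a pretrained diffusion denoiser.
MU, S = 2.0, 0.5

def denoise(x, sigma):
    """Posterior mean E[x0 | x] for the Gaussian toy model."""
    return (S**2 * x + sigma**2 * MU) / (S**2 + sigma**2)

def euler_sample(x, sigmas):
    """Euler steps on the probability-flow ODE dx/dsigma = (x - D)/sigma."""
    for s_cur, s_next in zip(sigmas[:-1], sigmas[1:]):
        d = (x - denoise(x, s_cur)) / s_cur
        x = x + (s_next - s_cur) * d
    return x

sigma_max, sigma_min, n_steps = 10.0, 0.01, 40
sigmas = np.geomspace(sigma_max, sigma_min, n_steps)

# Initial samples drawn from the exact marginal at sigma_max.
n = 5000
x = MU + np.sqrt(S**2 + sigma_max**2) * rng.standard_normal(n)
x = euler_sample(x, sigmas)

# Restart phase: re-inject forward noise up to an intermediate level
# sigma_r, then re-run the tail of the schedule. Because the forward
# noising step contracts error, repeating it can reduce the error
# accumulated by the reverse sampler. (sigma_r = 1.0 and two
# iterations are illustrative choices, not values from the paper.)
sigma_r = 1.0
for _ in range(2):
    x = x + np.sqrt(sigma_r**2 - sigma_min**2) * rng.standard_normal(n)
    x = euler_sample(x, np.geomspace(sigma_r, sigma_min, 20))

# The sample statistics should match the target N(MU, S**2).
print(x.mean(), x.std())
```

In DiffECC the same principle operates inside a posterior sampler for inverse problems, where a data-consistency term (from the degraded measurement) is enforced alongside the diffusion prior at each step; the toy above only shows the unconditional restart mechanism.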

Published

2024-03-24

How to Cite

Bao, Q., Hui, Z., Zhu, R., Ren, P., Xie, X., & Yang, W. (2024). Improving Diffusion-Based Image Restoration with Error Contraction and Error Correction. Proceedings of the AAAI Conference on Artificial Intelligence, 38(2), 756–764. https://doi.org/10.1609/aaai.v38i2.27833

Section

AAAI Technical Track on Computer Vision I