High-Fidelity Diffusion-Based Image Editing
DOI: https://doi.org/10.1609/aaai.v38i3.27991
Keywords: CV: Computational Photography, Image & Video Synthesis; CV: Applications
Abstract
Diffusion models have attained remarkable success in image generation and editing. It is widely recognized that employing larger inversion and denoising steps in a diffusion model leads to improved image reconstruction quality. However, the editing performance of diffusion models remains unsatisfactory even as denoising steps increase. This deficiency can be attributed to the conditional Markovian property of the editing process, in which errors accumulate across denoising steps. To tackle this challenge, we first propose an innovative framework that incorporates a rectifier module to modulate the diffusion model weights with residual features from the original images, thereby providing compensatory information to bridge the fidelity gap. Furthermore, we introduce a novel learning paradigm aimed at minimizing error propagation during editing, which trains the editing procedure in a manner similar to denoising score matching. Extensive experiments demonstrate that our proposed framework and training strategy achieve high-fidelity reconstruction and editing results across various numbers of denoising steps, while exhibiting exceptional performance in both quantitative metrics and qualitative assessments. Lastly, we explore our model's generalization through several applications, such as image-to-image translation and out-of-domain image editing.
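To make the rectifier idea concrete, below is a minimal PyTorch sketch of one plausible realization: a small module maps pooled residual features of the original image into a low-rank weight offset that modulates a frozen layer of the denoising network. All identifiers here (Rectifier, feat_dim, rank, and the low-rank parameterization itself) are our illustrative assumptions, not the authors' implementation.

```python
# Sketch only: one way a rectifier could turn residual features into a
# weight offset for a frozen layer. Names and the low-rank design are
# assumptions, not the paper's code.
import torch
import torch.nn as nn

class Rectifier(nn.Module):
    def __init__(self, feat_dim: int, out_features: int, in_features: int, rank: int = 4):
        super().__init__()
        # Predict low-rank factors A (out x r) and B (r x in) from residual features.
        self.to_a = nn.Linear(feat_dim, out_features * rank)
        self.to_b = nn.Linear(feat_dim, rank * in_features)
        self.rank = rank
        self.shape = (out_features, in_features)

    def forward(self, residual_feat: torch.Tensor) -> torch.Tensor:
        # residual_feat: (feat_dim,) pooled features of (original - reconstruction)
        a = self.to_a(residual_feat).view(self.shape[0], self.rank)
        b = self.to_b(residual_feat).view(self.rank, self.shape[1])
        return a @ b  # delta_W with shape (out_features, in_features)

# Usage: modulate a frozen linear layer's weight with the predicted offset.
layer = nn.Linear(320, 320)
for p in layer.parameters():
    p.requires_grad_(False)
rect = Rectifier(feat_dim=768, out_features=320, in_features=320)
residual_feat = torch.randn(768)            # stands in for pooled residual features
w_mod = layer.weight + rect(residual_feat)  # modulated weights used during denoising
x = torch.randn(2, 320)
y = nn.functional.linear(x, w_mod, layer.bias)
```

A low-rank offset keeps such a rectifier lightweight while still injecting image-specific compensatory information into every denoising step.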
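For context, the standard denoising score-matching objective that the abstract's training paradigm is said to resemble can be written as follows; the notation (noise predictor \epsilon_\theta, noise schedule \bar\alpha_t, condition c) is ours, and the paper's exact per-step editing loss may differ. Because each step is supervised directly against the sampled noise, per-step errors need not compound through the conditional Markov chain.

```latex
% Standard denoising score-matching objective (our notation, not verbatim
% from the paper): the network \epsilon_\theta is trained per step.
\mathcal{L}_{\mathrm{DSM}}
  = \mathbb{E}_{t,\, x_0,\, \epsilon \sim \mathcal{N}(0, I)}
    \left[ \left\lVert \epsilon_\theta(x_t, t, c) - \epsilon \right\rVert_2^2 \right],
\qquad
x_t = \sqrt{\bar\alpha_t}\, x_0 + \sqrt{1 - \bar\alpha_t}\, \epsilon .
```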
Published
2024-03-24
How to Cite
Hou, C., Wei, G., & Chen, Z. (2024). High-Fidelity Diffusion-Based Image Editing. Proceedings of the AAAI Conference on Artificial Intelligence, 38(3), 2184-2192. https://doi.org/10.1609/aaai.v38i3.27991
Issue
Vol. 38 No. 3 (2024)
Section
AAAI Technical Track on Computer Vision II