MIDMs: Matching Interleaved Diffusion Models for Exemplar-Based Image Translation
DOI
https://doi.org/10.1609/aaai.v37i2.25313
Keywords
CV: Computational Photography, Image & Video Synthesis; CV: Applications
Abstract
We present a novel method for exemplar-based image translation, called matching interleaved diffusion models (MIDMs). Most existing methods for this task were formulated as a GAN-based matching-then-generation framework. However, in this framework, matching errors induced by the difficulty of semantic matching across domains, e.g., sketch and photo, are easily propagated to the generation step, which in turn leads to degenerate results. Motivated by the recent success of diffusion models in overcoming the shortcomings of GANs, we incorporate diffusion models to address these limitations. Specifically, we formulate a diffusion-based matching-and-generation framework that interleaves cross-domain matching and diffusion steps in the latent space by iteratively feeding the intermediate warp into the noising process and denoising it to generate a translated image. In addition, to improve the reliability of the diffusion process, we design a confidence-aware process using cycle consistency to consider only confident regions during translation. Experimental results show that our MIDMs generate more plausible images than state-of-the-art methods.
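To make the interleaving described above concrete, below is a minimal PyTorch-style sketch of one possible reading of the loop: correlation-based warping of the reference latent toward the source, a cycle-consistency confidence mask, and a re-noise/denoise step at each timestep. The encoder, denoiser, warping, and masking functions are hypothetical stand-ins inferred from the abstract, not the authors' released implementation.

import torch
import torch.nn.functional as F

def cosine_correlation(src, ref):
    # Dense cosine similarity between flattened source/reference latents.
    # src, ref: (B, C, H, W) -> returns a (B, HW, HW) correlation volume.
    b, c, h, w = src.shape
    s = F.normalize(src.reshape(b, c, -1), dim=1)
    r = F.normalize(ref.reshape(b, c, -1), dim=1)
    return torch.einsum("bci,bcj->bij", s, r)

def soft_warp(ref, corr, tau=0.02):
    # Warp the reference latent toward the source via softmax attention
    # over the correlation volume.
    b, c, h, w = ref.shape
    attn = F.softmax(corr / tau, dim=-1)  # (B, HW_src, HW_ref)
    warped = torch.einsum("bij,bcj->bci", attn, ref.reshape(b, c, -1))
    return warped.reshape(b, c, h, w)

def cycle_confidence(corr, tau=0.02, thresh=0.5):
    # Cycle-consistency confidence: a position is confident if warping
    # source -> reference -> source returns (probabilistically) to itself.
    fwd = F.softmax(corr / tau, dim=-1)                   # src -> ref
    bwd = F.softmax(corr.transpose(1, 2) / tau, dim=-1)   # ref -> src
    cycle = torch.bmm(fwd, bwd)                           # (B, HW, HW)
    conf = cycle.diagonal(dim1=1, dim2=2)                 # prob. of round trip
    return (conf > thresh).float()                        # (B, HW) mask

@torch.no_grad()
def interleaved_translation(x_src, x_ref, encoder, denoiser, alphas_bar, steps):
    # Alternate matching (warp) and diffusion (re-noise, denoise) in latent
    # space. `denoiser(z_t, t)` is assumed to predict a clean latent, which
    # is a simplification of a full DDPM sampler.
    z_src, z_ref = encoder(x_src), encoder(x_ref)
    z = z_ref
    for t in reversed(steps):
        # 1) Matching: warp the current estimate toward the source structure.
        corr = cosine_correlation(z_src, z)
        warped = soft_warp(z, corr)
        # 2) Confidence: trust only cycle-consistent regions of the warp.
        b, c, h, w = warped.shape
        mask = cycle_confidence(corr).reshape(b, 1, h, w)
        z0 = mask * warped + (1 - mask) * z
        # 3) Diffusion: re-noise the intermediate warp to level t, denoise.
        noise = torch.randn_like(z0)
        z_t = alphas_bar[t].sqrt() * z0 + (1 - alphas_bar[t]).sqrt() * noise
        z = denoiser(z_t, t)
    return z  # decode with the latent model's decoder to obtain the image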
Published
2023-06-26
How to Cite
Seo, J., Lee, G., Cho, S., Lee, J., & Kim, S. (2023). MIDMs: Matching Interleaved Diffusion Models for Exemplar-Based Image Translation. Proceedings of the AAAI Conference on Artificial Intelligence, 37(2), 2191-2199. https://doi.org/10.1609/aaai.v37i2.25313
Section
AAAI Technical Track on Computer Vision II