Target-Driven Distillation: Consistency Distillation with Target Timestep Selection and Decoupled Guidance

Authors

  • Cunzheng Wang, Zhejiang University; Xiaohongshu
  • Ziyuan Guo, Xiaohongshu
  • Yuxuan Duan, Shanghai Jiao Tong University; Xiaohongshu
  • Huaxia Li, Xiaohongshu
  • Nemo Chen, Xiaohongshu
  • Xu Tang, Xiaohongshu
  • Yao Hu, Xiaohongshu

DOI:

https://doi.org/10.1609/aaai.v39i7.32820

Abstract

Consistency distillation methods have achieved significant success in accelerating the generative tasks of diffusion models. However, because previous consistency distillation methods use simple and straightforward strategies for selecting target timesteps, they often suffer from blurring and loss of detail in generated images. To address these limitations, we introduce Target-Driven Distillation (TDD), which (1) adopts a careful selection strategy for target timesteps, increasing training efficiency; (2) uses decoupled guidance during training, so that the guidance scale can still be tuned after distillation at inference time; and (3) can optionally be equipped with non-equidistant sampling and x0 clipping, enabling more flexible and accurate image sampling. Experiments verify that TDD achieves state-of-the-art performance in few-step generation, making it a strong choice among consistency distillation models.
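As a rough illustration of the optional x0 clipping mentioned in the abstract, the sketch below clamps the predicted clean image to the valid pixel range during a denoising step. This is a minimal sketch following common diffusion conventions (epsilon-prediction parameterization, pixel values in [-1, 1]); the function name and signature are assumptions, not the authors' implementation:

```python
import numpy as np

def clipped_x0_prediction(x_t, eps_pred, alpha_bar_t):
    """One x0-recovery sub-step with static x0 clipping (illustrative sketch).

    Recovers the predicted clean image x0 from the noise prediction eps_pred
    via the standard relation x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps,
    then clamps it to the assumed valid pixel range [-1, 1]. Clipping keeps
    few-step samples from drifting out of range between large steps.
    """
    sqrt_ab = np.sqrt(alpha_bar_t)
    sqrt_one_minus_ab = np.sqrt(1.0 - alpha_bar_t)
    x0_pred = (x_t - sqrt_one_minus_ab * eps_pred) / sqrt_ab  # eps -> x0 conversion
    return np.clip(x0_pred, -1.0, 1.0)                        # static x0 clipping
```

In a few-step sampler, the clipped `x0_pred` would then be re-noised to the next target timestep rather than used directly.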

Published

2025-04-11

How to Cite

Wang, C., Guo, Z., Duan, Y., Li, H., Chen, N., Tang, X., & Hu, Y. (2025). Target-Driven Distillation: Consistency Distillation with Target Timestep Selection and Decoupled Guidance. Proceedings of the AAAI Conference on Artificial Intelligence, 39(7), 7619–7627. https://doi.org/10.1609/aaai.v39i7.32820

Section

AAAI Technical Track on Computer Vision VI