Training-Free and Hardware-Friendly Acceleration for Diffusion Models via Similarity-based Token Pruning

Authors

  • Evelyn Zhang, Shanghai Jiao Tong University
  • Jiayi Tang, China University of Mining and Technology
  • Xuefei Ning, Tsinghua University
  • Linfeng Zhang, Shanghai Jiao Tong University

DOI:

https://doi.org/10.1609/aaai.v39i9.33071

Abstract

The excellent performance of diffusion models in image generation comes at the cost of substantial computation, which has prevented their deployment on edge devices and in interactive applications. Previous works mainly focus on reducing the number of sampling steps and compressing the denoising network, whereas this paper accelerates diffusion models by introducing SiTo, a similarity-based token pruning method that adaptively prunes redundant tokens from the input data. SiTo is designed to maximize the similarity between model predictions with and without token pruning using cheap and hardware-friendly operations, yielding significant acceleration ratios without performance drops, and sometimes even improvements in generation quality. For instance, zero-shot evaluation shows that SiTo achieves 1.90x and 1.75x acceleration on COCO30K and ImageNet while simultaneously reducing FID by 1.33 and 1.15. Besides, SiTo requires no training and no calibration data, making it plug-and-play in real-world applications.
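The abstract describes pruning redundant tokens by similarity and recovering their outputs cheaply. The sketch below is a minimal, hypothetical illustration of this general idea (cosine similarity to rank token redundancy, and copy-from-nearest-kept-token recovery), not the authors' SiTo algorithm; all function names and the `keep_ratio` parameter are assumptions for illustration.

```python
import numpy as np

def prune_tokens_by_similarity(tokens: np.ndarray, keep_ratio: float = 0.5):
    """Illustrative similarity-based token pruning (hypothetical sketch,
    not the paper's SiTo implementation).

    tokens: (N, D) array of token features.
    Returns indices of kept tokens, indices of pruned tokens, and for each
    pruned token the index of its most similar kept token.
    """
    n = tokens.shape[0]
    # Pairwise cosine similarity (cheap, hardware-friendly matmul).
    normed = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    sim = normed @ normed.T
    np.fill_diagonal(sim, -np.inf)
    # A token with a near-duplicate neighbor is redundant: pruning it and
    # reusing the neighbor's output should change the prediction least.
    redundancy = sim.max(axis=1)
    n_keep = max(1, int(n * keep_ratio))
    order = np.argsort(redundancy)          # least redundant first
    keep_idx = np.sort(order[:n_keep])
    prune_idx = np.sort(order[n_keep:])
    # Map each pruned token to its most similar kept token for recovery.
    recover_from = keep_idx[np.argmax(sim[np.ix_(prune_idx, keep_idx)], axis=1)]
    return keep_idx, prune_idx, recover_from

def unprune(outputs_kept: np.ndarray, keep_idx, prune_idx, recover_from, n: int):
    """Scatter kept outputs back and fill pruned slots by copying the
    output of each pruned token's most similar kept token."""
    full = np.empty((n, outputs_kept.shape[1]), dtype=outputs_kept.dtype)
    full[keep_idx] = outputs_kept
    kept_pos = {k: i for i, k in enumerate(keep_idx)}
    full[prune_idx] = outputs_kept[[kept_pos[r] for r in recover_from]]
    return full
```

Under this sketch, the expensive network layers run only on the `keep_idx` tokens, and `unprune` restores a full-length token sequence afterwards.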

Published

2025-04-11

How to Cite

Zhang, E., Tang, J., Ning, X., & Zhang, L. (2025). Training-Free and Hardware-Friendly Acceleration for Diffusion Models via Similarity-based Token Pruning. Proceedings of the AAAI Conference on Artificial Intelligence, 39(9), 9878-9886. https://doi.org/10.1609/aaai.v39i9.33071

Section

AAAI Technical Track on Computer Vision VIII