ProCache: Constraint-Aware Feature Caching with Selective Computation for Diffusion Transformer Acceleration

Authors

  • Fanpu Cao, South China University of Technology
  • Yaofo Chen, South China University of Technology
  • Zeng You, South China University of Technology
  • Wei Luo, South China Agricultural University; Pazhou Laboratory

DOI:

https://doi.org/10.1609/aaai.v40i24.39069

Abstract

Diffusion Transformers (DiTs) have achieved state-of-the-art performance in generative modeling, yet their high computational cost hinders real-time deployment. While feature caching offers a promising training-free acceleration solution by exploiting temporal redundancy, existing methods suffer from two key limitations: (1) uniform caching intervals fail to align with the non-uniform temporal dynamics of DiTs, and (2) naive feature reuse with excessively large caching intervals can lead to severe error accumulation. In this work, we analyze the evolution of DiT features during denoising and reveal that both feature changes and error propagation are highly time- and depth-varying. Motivated by this, we propose ProCache, a training-free dynamic feature caching framework that addresses these issues via two core components: (i) a constraint-aware caching pattern search module that generates non-uniform activation schedules through offline constrained sampling, tailored to the model’s temporal characteristics; and (ii) a selective computation module that recomputes only deep blocks and high-importance tokens within cached segments to mitigate error accumulation with minimal overhead. Extensive experiments on PixArt-alpha and DiT demonstrate that ProCache achieves up to 1.96× and 2.90× acceleration with negligible quality degradation, significantly outperforming prior caching-based methods.
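The core idea of the abstract, caching block outputs across denoising steps and recomputing them only on a non-uniform activation schedule, can be illustrated with a minimal sketch. This is not the authors' implementation; the names (`CachedBlock`, `make_schedule`) and the schedule parameters (`dense_until`, `sparse_interval`) are hypothetical, and the sketch omits the constrained-sampling search and the token-importance selection described in the paper.

```python
class CachedBlock:
    """Toy timestep-level feature cache: on active steps the wrapped
    block is recomputed and its output stored; on inactive steps the
    cached features are reused, skipping computation."""

    def __init__(self, block_fn):
        self.block_fn = block_fn   # the expensive transformer block
        self.cache = None          # last computed output
        self.num_computes = 0      # bookkeeping for the speedup

    def __call__(self, x, active):
        if active or self.cache is None:
            self.cache = self.block_fn(x)
            self.num_computes += 1
        return self.cache


def make_schedule(num_steps, dense_until, sparse_interval):
    """Hypothetical non-uniform activation schedule: recompute every
    step early in denoising (where features change fastest), then only
    every `sparse_interval` steps afterwards. ProCache instead searches
    such schedules offline via constrained sampling."""
    return [t < dense_until or t % sparse_interval == 0
            for t in range(num_steps)]


# Usage: a 20-step denoising loop where only 8 of 20 steps
# actually run the block, i.e. a 2.5x reduction for this block.
schedule = make_schedule(num_steps=20, dense_until=5, sparse_interval=5)
block = CachedBlock(lambda feats: [v * 2 for v in feats])
outputs = [block([t], active) for t, active in enumerate(schedule)]
```

With a uniform interval the dense early phase would be impossible to express; the non-uniform schedule is what lets the cache stay fresh exactly where the paper's analysis says features change most.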

Published

2026-03-14

How to Cite

Cao, F., Chen, Y., You, Z., & Luo, W. (2026). ProCache: Constraint-Aware Feature Caching with Selective Computation for Diffusion Transformer Acceleration. Proceedings of the AAAI Conference on Artificial Intelligence, 40(24), 19862–19870. https://doi.org/10.1609/aaai.v40i24.39069

Section

AAAI Technical Track on Machine Learning I