F³-Pruning: A Training-Free and Generalized Pruning Strategy towards Faster and Finer Text-to-Video Synthesis

Authors

  • Sitong Su University of Electronic Science and Technology of China
  • Jianzhi Liu University of Electronic Science and Technology of China
  • Lianli Gao University of Electronic Science and Technology of China
  • Jingkuan Song University of Electronic Science and Technology of China

DOI:

https://doi.org/10.1609/aaai.v38i5.28300

Keywords:

CV: Computational Photography, Image & Video Synthesis, ML: Learning on the Edge & Model Compression

Abstract

Recently, Text-to-Video (T2V) synthesis has achieved a breakthrough by training transformers or diffusion models on large-scale datasets. Nevertheless, running inference with such large models incurs huge costs. Previous inference-acceleration works either require costly retraining or are model-specific. To address this issue, instead of retraining, we explore the inference process of two mainstream T2V models built on transformers and diffusion models. The exploration reveals redundancy in the temporal attention modules of both models, which are commonly used to establish temporal relations among frames. Consequently, we propose a training-free and generalized pruning strategy called F³-Pruning to prune redundant temporal attention weights. Specifically, weights whose aggregate temporal attention values rank below a certain ratio are pruned. Extensive experiments on three datasets with a classic transformer-based model, CogVideo, and a typical diffusion-based model, Tune-A-Video, verify the effectiveness of F³-Pruning in inference acceleration, quality assurance, and broad applicability.
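The abstract's criterion — prune weights whose aggregate temporal attention values rank below a certain ratio — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the aggregation axis, the per-head grouping, and the function name `f3_prune` are all assumptions for the sake of the example.

```python
import numpy as np

def f3_prune(attn_scores, weights, prune_ratio=0.5):
    """Illustrative rank-and-prune step (not the paper's exact code).

    attn_scores: array of shape (heads, ...) holding temporal attention
                 values collected during inference.
    weights:     array of shape (heads, ...) holding the corresponding
                 temporal attention weights, grouped per head (assumed).
    prune_ratio: fraction of weight groups to prune.
    """
    # Aggregate attention values per head (summing over all positions
    # is an assumed choice of aggregation).
    agg = attn_scores.reshape(attn_scores.shape[0], -1).sum(axis=1)
    # Number of weight groups whose aggregate rank falls below the ratio.
    k = int(len(agg) * prune_ratio)
    if k == 0:
        return weights
    # Heads with the lowest aggregate temporal attention are pruned.
    lowest = np.argsort(agg)[:k]
    pruned = weights.copy()
    pruned[lowest] = 0.0  # training-free: weights are zeroed, not retrained
    return pruned
```

Because the strategy only ranks values observed at inference time and zeroes the corresponding weights, it needs no retraining and is agnostic to whether the backbone is a transformer (CogVideo) or a diffusion model (Tune-A-Video).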

Published

2024-03-24

How to Cite

Su, S., Liu, J., Gao, L., & Song, J. (2024). F³-Pruning: A Training-Free and Generalized Pruning Strategy towards Faster and Finer Text-to-Video Synthesis. Proceedings of the AAAI Conference on Artificial Intelligence, 38(5), 4961–4969. https://doi.org/10.1609/aaai.v38i5.28300

Section

AAAI Technical Track on Computer Vision IV