FastFLUX: Pruning FLUX with Block-wise Replacement and Sandwich Training

Authors

  • Fuhan Cai, Shanghai Jiao Tong University
  • Yong Guo, Max Planck Institute for Informatics
  • Jie Li, South China University of Technology
  • Wenbo Li, Chinese University of Hong Kong
  • Jian Chen, South China University of Technology
  • Xiangzhong Fang, Shanghai Jiao Tong University

DOI:

https://doi.org/10.1609/aaai.v40i4.37237

Abstract

Recent advancements in text-to-image (T2I) generation have led to the emergence of highly expressive models such as diffusion transformers (DiTs), exemplified by FLUX. However, their massive parameter sizes lead to slow inference, high memory usage, and poor deployability. Existing acceleration methods (e.g., single-step distillation and attention pruning) often suffer from significant performance degradation and incur substantial training costs. To address these limitations, we propose FastFLUX, an architecture-level pruning framework designed to enhance the inference efficiency of FLUX. At its core is the Block-wise Replacement with Linear Layers (BRLL) method, which replaces structurally complex residual branches in ResBlocks with lightweight linear layers while preserving the original shortcut connections for stability. Furthermore, we introduce Sandwich Training (ST), a localized fine-tuning strategy that leverages LoRA to supervise neighboring blocks, mitigating performance drops caused by structural replacement. Experiments show that our FastFLUX maintains high image quality under both qualitative and quantitative evaluations, while significantly improving inference speed, even with 20% of the hierarchy pruned.
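To make the BRLL idea concrete, below is a minimal PyTorch sketch of the structural swap the abstract describes: a block's complex residual branch is replaced by a single linear layer while the original shortcut connection is kept. The class and function names (`LinearReplacedBlock`, `replace_block`) and the assumption of a square `dim`-to-`dim` projection are illustrative guesses, not the authors' implementation.

```python
# Minimal sketch of Block-wise Replacement with Linear Layers (BRLL),
# assuming a residual block whose branch maps a dim-sized feature to itself.
# Names and shapes are hypothetical; the paper's actual code may differ.
import torch
import torch.nn as nn


class LinearReplacedBlock(nn.Module):
    """Stands in for a pruned ResBlock: the structurally complex residual
    branch is approximated by one lightweight linear layer, while the
    original shortcut (identity) connection is preserved for stability."""

    def __init__(self, dim: int):
        super().__init__()
        # Lightweight replacement for the pruned residual branch
        # (e.g., the attention/MLP sub-layers of the original block).
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Shortcut path is kept as-is; only the branch is replaced.
        return x + self.proj(x)


def replace_block(model: nn.Module, block_name: str, dim: int) -> None:
    """Swap one named sub-module of `model` for its linear replacement.
    `block_name` and `dim` are placeholders for a concrete FLUX block."""
    setattr(model, block_name, LinearReplacedBlock(dim))
```

This sketch covers only the structural replacement. In the Sandwich Training step the paper describes, LoRA adapters on the blocks neighboring each replaced block would then be fine-tuned locally to recover the quality lost to the swap.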

Published

2026-03-14

How to Cite

Cai, F., Guo, Y., Li, J., Li, W., Chen, J., & Fang, X. (2026). FastFLUX: Pruning FLUX with Block-wise Replacement and Sandwich Training. Proceedings of the AAAI Conference on Artificial Intelligence, 40(4), 2507-2515. https://doi.org/10.1609/aaai.v40i4.37237

Section

AAAI Technical Track on Computer Vision I