DiffBench Meets DiffAgent: End-to-End LLM-Driven Diffusion Acceleration Code Generation
DOI:
https://doi.org/10.1609/aaai.v40i27.39395
Abstract
Diffusion models have achieved remarkable success in image and video generation. However, their inherently multi-step inference process imposes substantial computational overhead, hindering real-world deployment. Accelerating diffusion models is therefore essential, yet determining how to combine multiple model acceleration techniques remains a significant challenge. To address this issue, we introduce a framework driven by large language models (LLMs) for automated acceleration code generation and evaluation. First, we present DiffBench, a comprehensive benchmark that implements a three-stage automated evaluation pipeline across diverse diffusion architectures, optimization combinations, and deployment scenarios. Second, we propose DiffAgent, an agent that generates optimal acceleration strategies and code for arbitrary diffusion models. DiffAgent employs a closed-loop workflow in which a planning component and a debugging component iteratively refine the output of a code generation component, while a genetic algorithm extracts performance feedback from the execution environment to guide subsequent code refinements. We provide a detailed explanation of the DiffBench construction and the design principles underlying DiffAgent. Extensive experiments show that DiffBench offers a thorough evaluation of generated code and that DiffAgent significantly outperforms existing LLMs in producing effective diffusion acceleration strategies.
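The abstract describes a genetic algorithm that uses performance feedback from the execution environment to guide the search over combinations of acceleration techniques. The sketch below illustrates that general idea only: the technique names, the mock latency model, and the GA hyperparameters are all invented for illustration and are not taken from the paper, and the `simulated_latency` function merely stands in for actually executing generated acceleration code.

```python
import random

# Hypothetical acceleration techniques an agent might combine; these names
# are illustrative placeholders, not the paper's actual search space.
TECHNIQUES = ["step_distillation", "feature_caching", "int8_quant",
              "token_merging", "cfg_skip"]

def simulated_latency(strategy):
    """Mock execution environment: score a strategy (a frozenset of
    techniques). Stands in for running the generated code and measuring it."""
    base = 100.0
    speedup = {"step_distillation": 0.45, "feature_caching": 0.8,
               "int8_quant": 0.7, "token_merging": 0.85, "cfg_skip": 0.9}
    lat = base
    for t in strategy:
        lat *= speedup[t]
    if len(strategy) > 3:          # penalty proxy for quality degradation
        lat *= 1.5
    return lat

def mutate(strategy):
    """Toggle one technique on or off, keeping the strategy non-empty."""
    s = set(strategy)
    t = random.choice(TECHNIQUES)
    s.symmetric_difference_update({t})
    return frozenset(s) if s else frozenset({t})

def evolve(generations=30, pop_size=8, seed=0):
    """Closed loop: evaluate candidates, keep the fittest, refine them."""
    random.seed(seed)
    pop = [frozenset(random.sample(TECHNIQUES, k=random.randint(1, 3)))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=simulated_latency)
        survivors = pop[: pop_size // 2]           # selection by measured latency
        children = [mutate(s) for s in survivors]  # feedback-driven refinement
        pop = survivors + children
    return min(pop, key=simulated_latency)

best = evolve()
print(sorted(best), round(simulated_latency(best), 2))
```

In DiffAgent itself, the evaluation step is real code execution under DiffBench's pipeline rather than a toy latency model, and the refinement step involves LLM planning and debugging components rather than a bare mutation operator; the loop structure above is the common skeleton.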
Published
2026-03-14
How to Cite
Jiao, J., Zhu, H., Yang, P., Wang, J., Liu, J., Liu, Z., Li, D., Fang, Y., Yong, J.-H., Wang, B., & Barsoum, E. (2026). DiffBench Meets DiffAgent: End-to-End LLM-Driven Diffusion Acceleration Code Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 40(27), 22372-22380. https://doi.org/10.1609/aaai.v40i27.39395
Section
AAAI Technical Track on Machine Learning IV