MPQ-DM: Mixed Precision Quantization for Extremely Low Bit Diffusion Models

Authors

  • Weilun Feng, Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences
  • Haotong Qin, ETH Zurich
  • Chuanguang Yang, Institute of Computing Technology, Chinese Academy of Sciences
  • Zhulin An, Institute of Computing Technology, Chinese Academy of Sciences
  • Libo Huang, Institute of Computing Technology, Chinese Academy of Sciences
  • Boyu Diao, Institute of Computing Technology, Chinese Academy of Sciences
  • Fei Wang, Institute of Computing Technology, Chinese Academy of Sciences
  • Renshuai Tao, Beijing Jiaotong University
  • Yongjun Xu, Institute of Computing Technology, Chinese Academy of Sciences
  • Michele Magno, ETH Zurich

DOI:

https://doi.org/10.1609/aaai.v39i16.33823

Abstract

Diffusion models have received wide attention in generation tasks. However, their expensive computational cost prevents their application in resource-constrained scenarios. Quantization is a practical solution that significantly reduces storage and computation by lowering the bit-width of parameters. However, existing quantization methods for diffusion models still cause severe performance degradation, especially at extremely low bit-widths (2-4 bit). The primary drop in performance stems from the significant discretization of activation values under low-bit quantization: too few activation candidates make it hard to quantize weight channels with significant outliers, and the discretized features prevent stable learning across the different time steps of the diffusion model. This paper presents MPQ-DM, a Mixed-Precision Quantization method for Diffusion Models. MPQ-DM relies on two techniques: (1) To mitigate the quantization error caused by weight channels with severe outliers, we propose an Outlier-Driven Mixed Quantization (OMQ) technique that uses kurtosis to identify outlier-salient channels and applies an optimized intra-layer mixed-precision bit-width allocation to recover accuracy within a target efficiency budget. (2) To learn robust representations across time steps, we construct a Time-Smoothed Relation Distillation (TRD) scheme between the quantized diffusion model and its full-precision counterpart, mapping discrete and continuous latent representations to a unified relation space to reduce representation inconsistency. Comprehensive experiments demonstrate that MPQ-DM achieves significant accuracy gains at extremely low bit-widths compared with SOTA quantization methods. Under the W2A4 setting, MPQ-DM reduces FID by 58% relative to the baseline, while all other methods collapse.
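The abstract's OMQ idea, scoring weight channels by kurtosis and spending extra bits on heavy-tailed (outlier-salient) channels, can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the function names (`kurtosis`, `allocate_bits`) and the simple "+1 bit to the top-k channels" allocation rule are assumptions made for the example.

```python
# Hypothetical sketch of kurtosis-driven intra-layer mixed-precision
# allocation: channels whose weight distribution is heavy-tailed (high
# excess kurtosis, i.e. significant outliers) receive a higher bit-width,
# while the layer's average bit-width stays near the target budget.

def kurtosis(xs):
    """Excess kurtosis of a 1-D list of floats (0 for a Gaussian)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n  # second central moment
    m4 = sum((x - mean) ** 4 for x in xs) / n  # fourth central moment
    return m4 / (m2 ** 2) - 3.0

def allocate_bits(channels, base_bits=2, budget_extra=1):
    """Give one extra bit to the `budget_extra` highest-kurtosis channels."""
    order = sorted(range(len(channels)),
                   key=lambda i: kurtosis(channels[i]),
                   reverse=True)
    bits = [base_bits] * len(channels)
    for i in order[:budget_extra]:
        bits[i] += 1
    return bits

# A near-uniform channel vs. one containing a large outlier weight:
smooth = [0.1, -0.2, 0.15, -0.05, 0.0, 0.2, -0.1, 0.05]
spiky  = [0.1, -0.2, 0.15, -0.05, 5.0, 0.2, -0.1, 0.05]
print(allocate_bits([smooth, spiky]))  # the outlier channel gets the extra bit
```

The point of the sketch is only the scoring criterion: kurtosis isolates channels whose quantization grid would otherwise be stretched by a few outliers, which is exactly where low-bit uniform quantization loses the most accuracy.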

Published

2025-04-11

How to Cite

Feng, W., Qin, H., Yang, C., An, Z., Huang, L., Diao, B., Wang, F., Tao, R., Xu, Y., & Magno, M. (2025). MPQ-DM: Mixed Precision Quantization for Extremely Low Bit Diffusion Models. Proceedings of the AAAI Conference on Artificial Intelligence, 39(16), 16595-16603. https://doi.org/10.1609/aaai.v39i16.33823

Section

AAAI Technical Track on Machine Learning II