LampQ: Towards Accurate Layer-wise Mixed Precision Quantization for Vision Transformers

Authors

  • Minjun Kim Seoul National University
  • Jaeri Lee Seoul National University
  • Jongjin Kim Seoul National University
  • Jeongin Yun Seoul National University
  • Yongmo Kwon Seoul National University
  • U Kang Seoul National University

DOI:

https://doi.org/10.1609/aaai.v40i7.37489

Abstract

How can we accurately quantize a pre-trained Vision Transformer model? Quantization algorithms compress Vision Transformers (ViTs) into low-bit formats, reducing memory and computation demands with minimal accuracy degradation. However, existing methods rely on uniform precision, ignoring the diverse sensitivity of ViT components to quantization. Metric-based Mixed Precision Quantization (MPQ) is a promising alternative, but previous MPQ methods for ViTs suffer from three major limitations: 1) coarse granularity, 2) mismatched metric scales across component types, and 3) quantization-unaware bit allocation. In this paper, we propose LampQ (Layer-wise Mixed Precision Quantization for Vision Transformers), an accurate metric-based MPQ method for ViTs that overcomes these limitations. LampQ performs layer-wise quantization to achieve both fine-grained control and efficient acceleration, incorporating a type-aware Fisher-based metric to measure sensitivity. LampQ then assigns bit-widths optimally through integer linear programming and further refines them iteratively. Extensive experiments show that LampQ achieves state-of-the-art performance in quantizing ViTs pre-trained on various tasks such as image classification, object detection, and zero-shot quantization.
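The bit-allocation step described above can be illustrated with a toy sketch. The layer names, sensitivity scores, and parameter counts below are hypothetical placeholders, not the paper's actual metric values; the sketch also brute-forces the tiny search space rather than calling an ILP solver, but it optimizes the same kind of objective: pick one bit-width per layer to minimize total sensitivity under a memory budget.

```python
from itertools import product

# Candidate bit-widths per layer.
BITS = [4, 6, 8]

# Hypothetical per-layer sensitivity scores (a proxy for the loss
# increase caused by quantizing that layer to each bit-width);
# lower bit-widths incur higher sensitivity.
sensitivity = {
    "attn.qkv":  {4: 0.90, 6: 0.30, 8: 0.05},
    "attn.proj": {4: 0.40, 6: 0.15, 8: 0.03},
    "mlp.fc1":   {4: 0.60, 6: 0.20, 8: 0.04},
}
# Hypothetical parameter counts (millions); memory cost = params * bits.
params = {"attn.qkv": 3.0, "attn.proj": 1.0, "mlp.fc1": 4.0}

def allocate(budget_mbits):
    """Choose one bit-width per layer minimizing total sensitivity,
    subject to a total memory budget in megabits (brute force over
    all assignments; an ILP solver handles the full-scale problem)."""
    layers = list(sensitivity)
    best = None
    for combo in product(BITS, repeat=len(layers)):
        cost = sum(params[l] * b for l, b in zip(layers, combo))
        if cost > budget_mbits:
            continue  # violates the memory constraint
        loss = sum(sensitivity[l][b] for l, b in zip(layers, combo))
        if best is None or loss < best[0]:
            best = (loss, dict(zip(layers, combo)))
    return best

loss, bits = allocate(budget_mbits=50.0)
# Under this budget the most sensitive layer keeps a higher bit-width
# while less sensitive layers are quantized more aggressively.
```

Under the 50-megabit budget above, uniform 8-bit (64 megabits) is infeasible, so the search trades bits away from the least sensitive layers first, which is exactly the behavior a sensitivity metric is meant to enable.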

Published

2026-03-14

How to Cite

Kim, M., Lee, J., Kim, J., Yun, J., Kwon, Y., & Kang, U. (2026). LampQ: Towards Accurate Layer-wise Mixed Precision Quantization for Vision Transformers. Proceedings of the AAAI Conference on Artificial Intelligence, 40(7), 5691–5699. https://doi.org/10.1609/aaai.v40i7.37489

Section

AAAI Technical Track on Computer Vision IV