OAC: Output-adaptive Calibration for Accurate Post-training Quantization

Authors

  • Ali Edalati, Huawei Noah's Ark Lab
  • Alireza Ghaffari, Huawei Noah's Ark Lab; Department of Mathematics and Statistics, McGill University
  • Mahsa Ghazvini Nejad, Huawei Noah's Ark Lab
  • Lu Hou, Huawei Noah's Ark Lab
  • Boxing Chen, Huawei Noah's Ark Lab
  • Masoud Asgharian, Department of Mathematics and Statistics, McGill University
  • Vahid Partovi Nia, Huawei Noah's Ark Lab

DOI:

https://doi.org/10.1609/aaai.v39i16.33807

Abstract

Deploying Large Language Models (LLMs) incurs major computational costs due to their rapidly expanding size. Compressing LLMs reduces the memory footprint, latency, and energy required for inference. Post-training Quantization (PTQ) techniques have been developed to compress LLMs while avoiding expensive re-training. Most PTQ approaches formulate the quantization error in terms of a layer-wise Euclidean loss, ignoring the model output. Each layer is then calibrated using its layer-wise Hessian to update the weights so as to minimize the quantization error. The Hessian is also used to detect the weights most sensitive to quantization. Such PTQ approaches are prone to accuracy drops in low-precision quantization. We propose Output-adaptive Calibration (OAC), which incorporates the model output into the calibration process. We formulate the quantization error in terms of the distortion of the output cross-entropy loss. OAC approximates the output-adaptive Hessian of each layer under reasonable assumptions to reduce the computational complexity. The output-adaptive Hessians are used to update the weight matrices and to detect the salient weights so as to preserve the model output. Our proposed method outperforms state-of-the-art baselines such as SpQR and BiLLM, especially at extreme low-precision (2-bit and binary) quantization.
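To illustrate the layer-wise Hessian-based calibration that the abstract contrasts OAC against, here is a minimal, simplified sketch in the spirit of OBQ/GPTQ-style PTQ: the Hessian of the layer-wise Euclidean loss is formed from calibration activations, then weights are quantized one input channel at a time while the rounding error is spread over the remaining channels via the inverse Hessian. The function names, the toy quantization grid, and the damping constant are illustrative assumptions, not the paper's implementation (OAC replaces this layer-wise Hessian with an output-adaptive one derived from the cross-entropy loss).

```python
import numpy as np

def layerwise_hessian(X, damp=1e-2):
    # Hessian of the layer-wise Euclidean loss ||(W - Wq) X||_F^2
    # with respect to each weight row is proportional to X @ X.T.
    # A small damping term keeps the matrix invertible.
    H = X @ X.T
    H += damp * np.mean(np.diag(H)) * np.eye(H.shape[0])
    return H

def hessian_calibrated_quantize(W, H, grid):
    # Simplified OBQ/GPTQ-style pass: quantize one input channel at a
    # time and compensate the rounding error on the not-yet-quantized
    # channels using the inverse Hessian.
    Wq = W.astype(float).copy()
    Hinv = np.linalg.inv(H)
    d = W.shape[1]
    for j in range(d):
        col = Wq[:, j]
        # Round each weight to the nearest grid point.
        q = grid[np.abs(col[:, None] - grid[None, :]).argmin(axis=1)]
        err = (col - q) / Hinv[j, j]
        if j + 1 < d:
            # Spread the rounding error over the remaining channels.
            Wq[:, j + 1:] -= np.outer(err, Hinv[j, j + 1:])
        Wq[:, j] = q
    return Wq

# Toy demonstration with random calibration data (illustrative only).
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 64))   # (in_features, n_calib_samples)
W = rng.standard_normal((4, 8))    # (out_features, in_features)
grid = np.linspace(-2.0, 2.0, 4)   # toy 2-bit quantization grid
H = layerwise_hessian(X)
Wq = hessian_calibrated_quantize(W, H, grid)
```

Note that the objective here depends only on the layer's own inputs and outputs, which is exactly the limitation the abstract points to: the calibration never sees how the quantization error propagates to the model's final output distribution.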

Published

2025-04-11

How to Cite

Edalati, A., Ghaffari, A., Nejad, M. G., Hou, L., Chen, B., Asgharian, M., & Partovi Nia, V. (2025). OAC: Output-adaptive Calibration for Accurate Post-training Quantization. Proceedings of the AAAI Conference on Artificial Intelligence, 39(16), 16453–16461. https://doi.org/10.1609/aaai.v39i16.33807

Section

AAAI Technical Track on Machine Learning II