Exploring Post-training Quantization in LLMs from Comprehensive Study to Low Rank Compensation
DOI:
https://doi.org/10.1609/aaai.v38i17.29908
Keywords:
NLP: (Large) Language Models, NLP: Generation, NLP: Learning & Optimization for NLP
Abstract
Post-training quantization (PTQ) has emerged as a promising technique for mitigating memory consumption and computational costs in large language models (LLMs). However, a systematic examination of various quantization schemes, model families, and quantization bit precisions has been absent from the literature. In this paper, we conduct a comprehensive analysis of these factors by investigating the effects of PTQ on weight-only, activation-only, and weight-and-activation quantization using diverse methods such as round-to-nearest (RTN), GPTQ, ZeroQuant, and their variants. We apply these methods to two distinct model families with parameters ranging from 125M to 176B. Our contributions include: (1) a sensitivity analysis revealing that activation quantization generally degrades model quality more than weight quantization, and that smaller models often outperform larger ones under activation quantization; (2) an evaluation and comparison of existing PTQ methods for maximizing model-size reduction while minimizing the impact on accuracy, revealing that none of the current methods recovers the original model quality with either INT4 weights or INT4 weights combined with INT8 activations; (3) based on these insights, an optimized method called Low-Rank Compensation (LoRC), which employs low-rank matrices to enhance model-quality recovery with a minimal increase in model size.
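To make the two ideas in the abstract concrete, the following is a minimal PyTorch sketch, not the paper's implementation: `rtn_quantize` illustrates symmetric per-row RTN weight quantization, and `lorc_compensate` illustrates the LoRC idea of approximating the quantization error with low-rank matrices obtained from a truncated SVD. The function names, the per-row scaling granularity, and the chosen rank are assumptions made for illustration.

```python
import torch

def rtn_quantize(weight: torch.Tensor, bits: int = 4):
    # Symmetric per-row round-to-nearest (RTN) sketch: scale each row so
    # its largest magnitude maps to the top of the signed integer range.
    qmax = 2 ** (bits - 1) - 1                              # 7 for INT4
    scale = weight.abs().amax(dim=1, keepdim=True).clamp_min(1e-8) / qmax
    q = torch.clamp(torch.round(weight / scale), -qmax - 1, qmax)
    return q, scale                                         # real kernels would pack INT4

def lorc_compensate(weight: torch.Tensor, q: torch.Tensor,
                    scale: torch.Tensor, rank: int = 8) -> torch.Tensor:
    # Low-Rank Compensation sketch: approximate the quantization error
    # E = W - W_hat by its rank-r truncated SVD and add that term back.
    w_hat = q * scale
    U, S, Vh = torch.linalg.svd(weight - w_hat, full_matrices=False)
    U_r = U[:, :rank] * S[:rank]                            # fold singular values into U
    return w_hat + U_r @ Vh[:rank, :]

# Hypothetical usage on a random weight matrix:
w = torch.randn(1024, 1024)
q, s = rtn_quantize(w, bits=4)
w_lorc = lorc_compensate(w, q, s, rank=16)
```

In deployment one would store the INT4 weights plus the two small low-rank factors rather than the dense compensated matrix, which is how the scheme keeps the increase in model size minimal.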
Published
2024-03-24
How to Cite
Yao, Z., Wu, X., Li, C., Youn, S., & He, Y. (2024). Exploring Post-training Quantization in LLMs from Comprehensive Study to Low Rank Compensation. Proceedings of the AAAI Conference on Artificial Intelligence, 38(17), 19377-19385. https://doi.org/10.1609/aaai.v38i17.29908
Issue
Vol. 38 No. 17 (2024)
Section
AAAI Technical Track on Natural Language Processing II