OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Models

Authors

  • Changhun Lee POSTECH
  • Jungyu Jin POSTECH
  • Taesu Kim SqueezeBits Inc.
  • Hyungjun Kim SqueezeBits Inc.
  • Eunhyeok Park POSTECH

DOI:

https://doi.org/10.1609/aaai.v38i12.29237

Keywords:

ML: Learning on the Edge & Model Compression, NLP: (Large) Language Models, NLP: Learning & Optimization for NLP

Abstract

Large language models (LLMs) with hundreds of billions of parameters require powerful server-grade GPUs for inference, limiting their practical deployment. To address this challenge, we introduce the outlier-aware weight quantization (OWQ) method, which aims to minimize the memory footprint of LLMs through low-precision representation. OWQ prioritizes a small subset of structured weights that are sensitive to quantization, storing them in high precision, while applying highly tuned quantization to the remaining dense weights. This sensitivity-aware mixed-precision scheme notably reduces quantization error, and extensive experiments demonstrate that 3.1-bit models using OWQ perform comparably to 4-bit models optimized by OPTQ. Furthermore, OWQ incorporates a parameter-efficient fine-tuning scheme for task-specific adaptation, called weak column tuning (WCT), enabling accurate task-specific LLM adaptation with minimal memory overhead in the optimized format. OWQ represents a notable advance in the flexibility, efficiency, and practicality of LLM optimization. The source code is available at https://github.com/xvyaward/owq.
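To make the mixed-precision idea in the abstract concrete, the sketch below (PyTorch) keeps a handful of quantization-sensitive "weak" columns of a linear layer in fp16 and quantizes the remaining dense weights to 3 bits. This is a minimal, hypothetical illustration, not the authors' released implementation (see the GitHub link above): the function names are invented, a simple per-input-channel activation-magnitude proxy stands in for the sensitivity metric used in the paper, and plain round-to-nearest quantization stands in for the tuned quantization OWQ builds on.

```python
# Illustrative sketch of outlier-aware mixed-precision weight quantization.
# Hypothetical helper, NOT the authors' released OWQ code
# (see https://github.com/xvyaward/owq for the official implementation).
import torch

def split_and_quantize(weight: torch.Tensor, act_scale: torch.Tensor,
                       n_outlier_cols: int = 8, bits: int = 3):
    """Keep the most quantization-sensitive ("weak") columns in fp16 and
    quantize the remaining dense weights to low precision.

    weight:    (out_features, in_features) linear-layer weight
    act_scale: (in_features,) per-input-channel activation statistic used
               here as a simple sensitivity proxy (an assumption for this
               sketch, not the paper's exact sensitivity measure)
    """
    # 1. Pick the columns flagged as most sensitive; store them in fp16.
    outlier_idx = torch.topk(act_scale, n_outlier_cols).indices
    keep_mask = torch.zeros(weight.shape[1], dtype=torch.bool)
    keep_mask[outlier_idx] = True
    fp16_cols = weight[:, keep_mask].half()

    # 2. Uniformly quantize the remaining dense columns per output channel
    #    (round-to-nearest; OWQ itself uses a more carefully tuned scheme).
    dense = weight[:, ~keep_mask]
    qmax = 2 ** (bits - 1) - 1
    scale = dense.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / qmax
    q_dense = torch.clamp(torch.round(dense / scale), -qmax - 1, qmax).to(torch.int8)

    return q_dense, scale, fp16_cols, outlier_idx

def dequantize(q_dense, scale, fp16_cols, outlier_idx, out_features, in_features):
    """Reassemble a dense fp16 weight for reference or debugging."""
    keep_mask = torch.zeros(in_features, dtype=torch.bool)
    keep_mask[outlier_idx] = True
    w = torch.empty(out_features, in_features, dtype=torch.float16)
    w[:, ~keep_mask] = (q_dense.float() * scale).half()
    w[:, keep_mask] = fp16_cols
    return w
```

In the weak column tuning (WCT) setting described in the abstract, only such high-precision weak columns would be updated during task-specific fine-tuning, while the quantized dense weights stay frozen, which is what keeps the memory overhead of adaptation small.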

Published

2024-03-24

How to Cite

Lee, C., Jin, J., Kim, T., Kim, H., & Park, E. (2024). OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 38(12), 13355-13364. https://doi.org/10.1609/aaai.v38i12.29237

Issue

Vol. 38 No. 12 (2024)
Section

AAAI Technical Track on Machine Learning III