Agile-Quant: Activation-Guided Quantization for Faster Inference of LLMs on the Edge

Authors

  • Xuan Shen, Northeastern University
  • Peiyan Dong, Northeastern University
  • Lei Lu, Northeastern University
  • Zhenglun Kong, Northeastern University
  • Zhengang Li, Northeastern University
  • Ming Lin, Oracle
  • Chao Wu, Northeastern University
  • Yanzhi Wang, Northeastern University

DOI

https://doi.org/10.1609/aaai.v38i17.29860

Keywords

NLP: (Large) Language Models, NLP: Generation

Abstract

Large Language Models (LLMs) stand out for their impressive performance on intricate language modeling tasks. However, their demanding computational and memory requirements hinder broad deployment on edge devices. Quantization is therefore introduced to improve LLMs' on-device efficiency. Recent works show that 8-bit or lower weight quantization is feasible with minimal impact on end-to-end task performance, yet activations are left unquantized. Meanwhile, mainstream commodity edge devices still struggle to execute these sub-8-bit quantized networks efficiently. In this paper, we propose Agile-Quant, an Activation-Guided quantization framework for faster Inference of popular LLMs on the Edge. Guided by hardware profiling and activation analysis, we first introduce a basic activation quantization strategy to balance the trade-off between task performance and real inference speed. We then leverage an activation-aware token pruning technique to reduce outliers and their adverse impact on attentivity. Finally, we use the SIMD-based 4-bit multiplier and our efficient TRIP matrix multiplication to implement the accelerator for LLMs on the edge. We apply our framework to LLMs of different scales, including LLaMA, OPT, and BLOOM, with 4-bit or 8-bit activation quantization and 4-bit weight quantization. Experiments show that Agile-Quant quantizes model weights and activations simultaneously while maintaining task performance comparable to existing weight-only quantization methods. Moreover, in the 8-bit and 4-bit scenarios, Agile-Quant achieves an on-device speedup of up to 2.55x over FP16 counterparts across multiple edge devices, marking a pioneering advancement in this domain.
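To make the activation-quantization idea in the abstract concrete, below is a minimal sketch of symmetric per-token integer quantization at 4 or 8 bits. All names here (e.g. quantize_activations_per_token) are hypothetical illustrations, and this is only a generic baseline scheme, not Agile-Quant's actual strategy, token pruning, or TRIP kernels.

```python
import numpy as np

def quantize_activations_per_token(x: np.ndarray, n_bits: int = 4):
    """Symmetric per-token quantization of an activation matrix.

    x: (num_tokens, hidden_dim) float activations.
    Returns integer codes in [-qmax, qmax] plus one scale per token.
    NOTE: a hypothetical sketch, not the paper's actual algorithm.
    """
    qmax = 2 ** (n_bits - 1) - 1               # 7 for 4-bit, 127 for 8-bit
    scale = np.abs(x).max(axis=1, keepdims=True) / qmax
    scale = np.maximum(scale, 1e-8)            # guard against all-zero rows
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Map integer codes back to approximate float activations."""
    return q.astype(np.float32) * scale

# Example: quantize a toy activation matrix and measure the error.
x = np.random.randn(8, 16).astype(np.float32)
q, s = quantize_activations_per_token(x, n_bits=4)
err = np.abs(dequantize(q, s) - x).mean()
print(f"mean abs quantization error: {err:.4f}")
```

Per-token scales are a natural fit for this setting because activation outliers in LLMs tend to concentrate in particular tokens, which is also what motivates the activation-aware token pruning described in the abstract.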

Published

2024-03-24

How to Cite

Shen, X., Dong, P., Lu, L., Kong, Z., Li, Z., Lin, M., Wu, C., & Wang, Y. (2024). Agile-Quant: Activation-Guided Quantization for Faster Inference of LLMs on the Edge. Proceedings of the AAAI Conference on Artificial Intelligence, 38(17), 18944-18951. https://doi.org/10.1609/aaai.v38i17.29860

Section

AAAI Technical Track on Natural Language Processing II