Fluctuation-Based Adaptive Structured Pruning for Large Language Models

Authors

  • Yongqi An (Institute of Automation, Chinese Academy of Sciences; University of Chinese Academy of Sciences)
  • Xu Zhao (Institute of Automation, Chinese Academy of Sciences; Objecteye Inc.)
  • Tao Yu (Institute of Automation, Chinese Academy of Sciences; University of Chinese Academy of Sciences)
  • Ming Tang (Institute of Automation, Chinese Academy of Sciences; University of Chinese Academy of Sciences)
  • Jinqiao Wang (Institute of Automation, Chinese Academy of Sciences; University of Chinese Academy of Sciences; Objecteye Inc.; Wuhan AI Research)

DOI:

https://doi.org/10.1609/aaai.v38i10.28960

Keywords:

ML: Learning on the Edge & Model Compression, NLP: (Large) Language Models

Abstract

Network pruning is a promising way to address the huge computing resource demands of deploying and running inference with Large Language Models (LLMs). Being retraining-free is important for LLM pruning methods; however, almost all existing retraining-free pruning approaches for LLMs focus on unstructured pruning, which requires specific hardware support for acceleration. In this paper, we propose a novel retraining-free structured pruning framework for LLMs, named FLAP (FLuctuation-based Adaptive Structured Pruning). FLAP is hardware-friendly, effectively reducing storage and enhancing inference speed. For effective structured pruning of LLMs, we highlight three critical elements: formulating structured importance metrics, adaptively searching for the global compressed model structure, and implementing compensation mechanisms to mitigate performance loss. First, FLAP uses a fluctuation-based pruning metric to determine whether the output feature map is easily recoverable when a column of the weight matrix is removed. It then standardizes the importance scores to adaptively determine the global compressed model structure. Finally, FLAP adds bias terms to recover the output feature maps using the baseline values. We thoroughly evaluate our approach on a variety of language benchmarks. Without any retraining, our method significantly outperforms state-of-the-art methods, including LLM-Pruner and the structured-pruning extension of Wanda. The code is released at https://github.com/CASIA-IVA-Lab/FLAP.
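The three steps sketched in the abstract can be illustrated with a short NumPy snippet. This is a minimal sketch under our own assumptions, not the authors' released implementation (see the linked repository for that): the function names, shapes, and the exact form of the variance-times-column-norm metric and the global quantile threshold are hypothetical readings of the abstract's description.

    import numpy as np

    def fluctuation_scores(X, W):
        # X: (n_samples, d_in) calibration activations feeding a linear layer.
        # W: (d_out, d_in) weight matrix of that layer.
        # A channel whose activation barely fluctuates around its mean can be
        # replaced by a constant, so low variance suggests it is safe to prune.
        variance = X.var(axis=0)            # per-channel fluctuation, (d_in,)
        col_norm2 = (W ** 2).sum(axis=0)    # squared weight-column norms, (d_in,)
        return variance * col_norm2         # structured importance per input channel

    def standardize(scores):
        # z-score the scores within a module so they are comparable across
        # layers, allowing one global threshold (adaptive per-layer ratios).
        return (scores - scores.mean()) / (scores.std() + 1e-8)

    def prune_with_bias(X, W, b, keep):
        # Drop weight columns of pruned channels and fold their mean
        # ("baseline") contribution into the bias to recover the output map.
        mu = X.mean(axis=0)                 # baseline activations, (d_in,)
        b = b + W[:, ~keep] @ mu[~keep]     # compensation term, (d_out,)
        return W[:, keep], b

Pooling the standardized scores from all layers and applying a single global quantile then yields the compressed structure, e.g. keep = standardize(scores) > np.quantile(all_standardized_scores, target_sparsity), so each layer keeps whatever fraction of channels clears the shared threshold.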

Published

2024-03-24

How to Cite

An, Y., Zhao, X., Yu, T., Tang, M., & Wang, J. (2024). Fluctuation-Based Adaptive Structured Pruning for Large Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 38(10), 10865-10873. https://doi.org/10.1609/aaai.v38i10.28960

Issue

Vol. 38 No. 10 (2024)
Section

AAAI Technical Track on Machine Learning I