SoLA: Leveraging Soft Activation Sparsity and Low-Rank Decomposition for Large Language Model Compression
DOI:
https://doi.org/10.1609/aaai.v39i16.33923
Abstract
Large language models (LLMs) have demonstrated impressive capabilities across various tasks, but their billion-scale parameters pose deployment challenges. Although existing methods attempt to reduce the scale of LLMs, they require either special hardware support or expensive post-training to maintain model quality. To facilitate efficient and affordable model slimming, we propose a novel training-free compression method for LLMs, named “SoLA”, which leverages Soft activation sparsity and Low-rAnk decomposition. Based on our analysis of the activation patterns in the feed-forward networks (FFNs) of modern LLMs, SoLA identifies and retains the minority of components that contribute significantly to inference, while compressing the majority through low-rank decomposition. To alleviate decomposition loss, SoLA is equipped with an adaptive component-wise low-rank allocation strategy that assigns appropriate truncation positions to different weight matrices. We conduct extensive experiments on the LLaMA-2-7B/13B/70B and Mistral-7B models across a variety of benchmarks. SoLA exhibits remarkable improvement in both language modeling and downstream task accuracy without post-training. For example, at a 30% compression rate on the LLaMA-2-70B model, SoLA surpasses the state-of-the-art method by reducing perplexity from 6.95 to 4.44 and enhancing downstream task accuracy by 10%.
Published
2025-04-11
How to Cite
Huang, X., Huang, Y.-L., & Wen, Z. (2025). SoLA: Leveraging Soft Activation Sparsity and Low-Rank Decomposition for Large Language Model Compression. Proceedings of the AAAI Conference on Artificial Intelligence, 39(16), 17494–17502. https://doi.org/10.1609/aaai.v39i16.33923
Issue
Section
AAAI Technical Track on Machine Learning II
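The abstract's core operation, compressing a weight matrix through low-rank decomposition, can be illustrated with a generic truncated-SVD sketch. This is not the paper's exact SoLA procedure (which additionally retains high-contribution components and allocates ranks adaptively per matrix); the matrix shapes and the rank `r` below are illustrative assumptions.

```python
import numpy as np

# Illustrative FFN up-projection shapes (assumed, not from the paper).
d_model, d_ff = 64, 256
rng = np.random.default_rng(0)
W = rng.standard_normal((d_ff, d_model))

# Truncated SVD: keep only the top-r singular directions.
r = 16
U, S, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * S[:r]   # shape (d_ff, r)
B = Vt[:r, :]          # shape (r, d_model)

# Storing the factors A and B replaces W whenever
# r * (d_ff + d_model) < d_ff * d_model.
orig_params = W.size               # 256 * 64 = 16384
lowrank_params = A.size + B.size   # 16 * (256 + 64) = 5120

# A @ B is the best rank-r approximation of W in Frobenius norm
# (Eckart-Young theorem); the residual is the decomposition loss
# that rank-allocation strategies aim to keep small.
err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(orig_params, lowrank_params, f"relative error: {err:.3f}")
```

At inference time the product `x @ B.T @ A.T` replaces `x @ W.T`, trading one large matrix multiply for two smaller ones.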