SkipCat: Rank-Maximized Low-Rank Compression of Large Language Models via Shared Projection and Block Skipping

Authors

  • Yu-Chen Lu, National Yang Ming Chiao Tung University; Macronix International Co., Ltd.
  • Sheng-Feng Yu, National Yang Ming Chiao Tung University; Macronix International Co., Ltd.
  • Hui-Hsien Weng, National Yang Ming Chiao Tung University
  • Pei-Shuo Wang, National Yang Ming Chiao Tung University
  • Yu-Fang Hu, National Yang Ming Chiao Tung University; Skymizer Taiwan Inc.
  • Liang Hung-Chun, Skymizer Taiwan Inc.
  • Hung-Yueh Chiang, The University of Texas at Austin
  • Kai-Chiang Wu, National Yang Ming Chiao Tung University

DOI:

https://doi.org/10.1609/aaai.v40i29.39591

Abstract

Large language models (LLMs) have achieved remarkable performance across a wide range of tasks. However, their substantial parameter counts pose significant challenges for deployment on edge devices with limited computational and memory resources. Low-rank compression is a promising approach to address this issue, as it reduces both computational and memory costs, making LLMs more suitable for resource-constrained environments. Nonetheless, naïve low-rank compression only yields savings under aggressive rank reduction: for a square weight matrix, the retained rank must fall below half of the matrix dimension before any memory or computation is saved. Such aggressive truncation, however, typically results in substantial performance degradation. To address this trade-off, we propose SkipCat, a novel low-rank compression framework that retains higher ranks at the same compression rate. First, we introduce an intra-layer shared low-rank projection, in which multiple weight matrices that consume the same input share a common projection, reducing redundancy and improving compression efficiency. Second, we propose a block skipping technique that omits the computations and memory transfers of selected sub-blocks within the low-rank decomposition. Together, these techniques allow the compressed model to retain more effective ranks under the same compression budget. Experimental results show that, without any additional fine-tuning, our method outperforms previous low-rank compression approaches by a 7% accuracy improvement on zero-shot tasks at the same compression rate. These results highlight the effectiveness of our rank-maximized compression strategy in preserving model performance under tight resource constraints.
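
To make the rank arithmetic in the abstract concrete, the sketch below is a minimal illustration under assumed shapes, not the authors' implementation. It counts parameters for three d x d projection matrices that consume the same input (e.g., the query, key, and value projections of one layer), showing why a naïve per-matrix factorization only saves memory once the rank drops below d/2, and how a shared down-projection across the three matrices affords a higher rank at the same parameter budget. All dimensions and the rank values are illustrative choices.

```python
# Illustrative parameter-count arithmetic for low-rank compression.
# Assumed setup: three d x d weight matrices (e.g., Q, K, V) sharing one input.

d = 4096          # hidden size (illustrative)
n_mats = 3        # number of matrices that share the same input

dense_params = n_mats * d * d

def naive_lowrank_params(r):
    # Each W is factored independently as U @ V with U: d x r and V: r x d.
    return n_mats * (d * r + r * d)

def shared_projection_params(r):
    # One shared down-projection P: d x r applied to the common input,
    # plus a separate up-projection (r x d) per matrix.
    return d * r + n_mats * r * d

# Naïve factorization saves parameters only when r < d / 2.
break_even = max(r for r in range(1, d) if naive_lowrank_params(r) < dense_params)
print(f"naive break-even rank: {break_even} (d/2 = {d // 2})")

# Under the same budget, the shared projection supports a higher rank.
budget = naive_lowrank_params(1024)      # budget fixed by naive rank 1024
shared_rank = max(r for r in range(1, d) if shared_projection_params(r) <= budget)
print(f"same budget: naive rank 1024 vs shared-projection rank {shared_rank}")
```

With these illustrative numbers, the shared projection raises the affordable rank from 1024 to 1536 at an identical parameter budget, which is the intuition behind the paper's claim of retaining more effective ranks under the same compression rate.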

Published

2026-03-14

How to Cite

Lu, Y.-C., Yu, S.-F., Weng, H.-H., Wang, P.-S., Hu, Y.-F., Hung-Chun, L., Chiang, H.-Y., & Wu, K.-C. (2026). SkipCat: Rank-Maximized Low-Rank Compression of Large Language Models via Shared Projection and Block Skipping. Proceedings of the AAAI Conference on Artificial Intelligence, 40(29), 24124-24132. https://doi.org/10.1609/aaai.v40i29.39591

Section

AAAI Technical Track on Machine Learning VI