Numerical Pruning for Efficient Autoregressive Models

Authors

  • Xuan Shen, Northeastern University
  • Zhao Song, University of California, Berkeley
  • Yufa Zhou, University of Pennsylvania
  • Bo Chen, Middle Tennessee State University
  • Jing Liu, Monash University
  • Ruiyi Zhang, Adobe Research
  • Ryan A. Rossi, Adobe Research
  • Hao Tan, Adobe Research
  • Tong Yu, Adobe Research
  • Xiang Chen, Adobe Research
  • Yufan Zhou, Adobe Research
  • Tong Sun, Adobe Research
  • Pu Zhao, Northeastern University
  • Yanzhi Wang, Northeastern University
  • Jiuxiang Gu, Adobe Research

DOI:

https://doi.org/10.1609/aaai.v39i19.34249

Abstract

Transformers have emerged as the leading architecture in deep learning, proving to be versatile and highly effective across diverse domains beyond language and image processing. However, their impressive performance often incurs high computational costs due to their substantial model size. This paper focuses on compressing decoder-only transformer-based autoregressive models through structural weight pruning to improve model efficiency while preserving performance for both language and image generation tasks. Specifically, we propose a training-free pruning method that computes a numerical score with Newton's method for the Attention and MLP modules separately. In addition, we propose a compensation algorithm to recover the pruned model and further improve performance. To verify the effectiveness of our method, we provide both theoretical support and extensive experiments. Our experiments show that the method achieves state-of-the-art performance with reduced memory usage and faster generation speed on GPUs.
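
The abstract does not spell out the Newton's-method-based numerical score, so the following is only a minimal sketch of one plausible second-order, OBS-style structural saliency that could be computed in a training-free setting. The function name, the Hessian proxy (2 X^T X built from calibration activations), the damping term, and the group-wise aggregation are all assumptions for illustration, not the paper's actual algorithm.

```python
import torch

def structural_pruning_scores(weight: torch.Tensor,
                              calib_inputs: torch.Tensor,
                              group_size: int,
                              damping: float = 1e-2) -> torch.Tensor:
    """Hypothetical saliency for structural groups of input channels.

    weight:       (out_features, in_features) linear-layer weight
    calib_inputs: (n_samples, in_features) calibration activations
    group_size:   consecutive input channels per structural group
    Returns one score per group; lower = cheaper to prune.
    """
    # Hessian proxy of the layer-wise reconstruction loss: H ~ 2 X^T X.
    H = 2.0 * calib_inputs.T @ calib_inputs
    # Damping keeps H invertible when n_samples < in_features.
    H = H + damping * H.diagonal().mean() * torch.eye(H.shape[0])

    # Per-weight saliency proportional to the classic OBS score
    # w^2 / [H^{-1}]_ii, i.e., the increase in reconstruction error
    # incurred by zeroing that weight.
    H_inv_diag = torch.linalg.inv(H).diagonal()
    saliency = weight.pow(2) / H_inv_diag.unsqueeze(0)  # (out, in)

    # Aggregate over each structural group (e.g., one attention head
    # or one block of MLP channels).
    n_groups = weight.shape[1] // group_size
    return saliency.view(weight.shape[0], n_groups, group_size).sum(dim=(0, 2))

# Usage: rank groups and prune the lowest-scoring 25%.
W = torch.randn(512, 2048)
X = torch.randn(128, 2048)
scores = structural_pruning_scores(W, X, group_size=64)
prune_ids = scores.argsort()[: int(0.25 * scores.numel())]
```

The damping term is a standard trick for keeping the Hessian proxy well-conditioned when the calibration set is smaller than the input dimension; a compensation step that updates surviving weights after pruning, as the abstract describes, would be applied on top of such a score and is omitted here.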

Published

2025-04-11

How to Cite

Shen, X., Song, Z., Zhou, Y., Chen, B., Liu, J., Zhang, R., … Gu, J. (2025). Numerical Pruning for Efficient Autoregressive Models. Proceedings of the AAAI Conference on Artificial Intelligence, 39(19), 20418–20426. https://doi.org/10.1609/aaai.v39i19.34249

Issue

Vol. 39 No. 19 (2025)

Section

AAAI Technical Track on Machine Learning V