Learnable Permutation for Structured Sparsity on Transformer Models
DOI:
https://doi.org/10.1609/aaai.v40i28.39501
Abstract
Structured sparsity has emerged as a popular model pruning technique, widely adopted in various architectures, including CNNs, Transformer models, and especially large language models (LLMs) in recent years. A promising direction to further improve post-pruning performance is weight permutation, which reorders model weights into patterns more amenable to pruning. However, the exponential growth of the permutation search space with the scale of Transformer architectures forces most methods to rely on greedy or heuristic algorithms, limiting the effectiveness of reordering. In this work, we propose a novel end-to-end learnable permutation framework. Our method introduces a learnable permutation cost matrix to quantify the cost of swapping any two input channels of a given weight matrix, a differentiable bipartite matching solver to obtain the optimal binary permutation matrix given a cost matrix, and a sparsity optimization loss function to directly optimize the permutation operator. We extensively validate our approach on vision and language Transformers, demonstrating that our method achieves state-of-the-art permutation results for structured sparsity.
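The three components named in the abstract (a learnable cost matrix, a differentiable bipartite matching solver, and a sparsity loss on the permuted weights) can be illustrated with a minimal PyTorch sketch. Everything below is an assumption for illustration, not the paper's implementation: Sinkhorn normalization plus a straight-through Hungarian step stands in for the differentiable matching solver, an N:M magnitude loss stands in for the sparsity optimization loss, and the names sinkhorn, hard_permutation, and nm_pruning_loss are hypothetical.

import torch
from scipy.optimize import linear_sum_assignment

def sinkhorn(logits, n_iters=20):
    # Alternately normalize rows and columns in log-space so the result
    # approaches a doubly-stochastic (soft permutation) matrix.
    for _ in range(n_iters):
        logits = logits - logits.logsumexp(dim=1, keepdim=True)
        logits = logits - logits.logsumexp(dim=0, keepdim=True)
    return logits.exp()

def hard_permutation(soft_P):
    # Solve the bipartite matching on the soft matrix to get a binary
    # permutation, then pass gradients through with the straight-through trick.
    rows, cols = linear_sum_assignment(-soft_P.detach().cpu().numpy())
    hard = torch.zeros_like(soft_P)
    hard[rows, cols] = 1.0
    return hard + soft_P - soft_P.detach()

def nm_pruning_loss(W, n=2, m=4):
    # Magnitude mass that N:M pruning would remove: the (m - n) smallest
    # |w| in each group of m consecutive input channels.
    groups = W.abs().reshape(W.shape[0], -1, m)
    smallest, _ = groups.topk(m - n, dim=-1, largest=False)
    return smallest.sum()

# Hypothetical usage: learn a cost matrix that reorders the input
# channels of one frozen weight matrix before 2:4 pruning.
W = torch.randn(512, 256)                                # weights of one layer
cost = torch.nn.Parameter(0.1 * torch.randn(256, 256))   # learnable cost matrix
opt = torch.optim.Adam([cost], lr=1e-2)

for step in range(200):
    P = hard_permutation(sinkhorn(cost))   # binary permutation, differentiable
    loss = nm_pruning_loss(W @ P)          # cost of pruning the permuted weights
    opt.zero_grad()
    loss.backward()
    opt.step()

Under these assumptions, the straight-through step keeps the forward pass binary, so the pruning loss is always evaluated on an actual permutation, while gradients still reach the cost matrix through the Sinkhorn relaxation.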
Published
2026-03-14
How to Cite
Li, Z., Liu, J., Li, G., Xu, Y., Liu, Z., Yin, X., … Barsoum, E. (2026). Learnable Permutation for Structured Sparsity on Transformer Models. Proceedings of the AAAI Conference on Artificial Intelligence, 40(28), 23319–23327. https://doi.org/10.1609/aaai.v40i28.39501
Section
AAAI Technical Track on Machine Learning V