Share Your Attention: Transformer Weight Sharing via Matrix-based Dictionary Learning

Authors

  • Magauiya Zhussip, MWS AI
  • Dmitriy Shopkhoev, MWS AI & ITMO University
  • Ammar Ali, MWS AI
  • Stamatios Lefkimmiatis, MWS AI

DOI:

https://doi.org/10.1609/aaai.v40i34.40165

Abstract

Large language models (LLMs) have revolutionized AI applications, yet their high computational and memory demands hinder their widespread deployment. Existing compression techniques focus on intra-block optimizations (e.g., low-rank approximation or attention head pruning), while the repetitive layered structure of transformers implies significant inter-block redundancy, a dimension largely unexplored beyond key-value (KV) caching. Inspired by dictionary learning in convolutional networks, we propose a framework for structured weight sharing across transformer layers. Our approach decomposes attention projection matrices (Q, K, V, O) into shared dictionary atoms, reducing the attention module's parameters by 66.7% (e.g., 226.5M → 75M in a 700M-parameter model) while achieving on-par performance. Unlike complex methods requiring distillation or architectural changes, MASA (Matrix Atom Sharing in Attention) operates as a drop-in replacement, trained with standard optimizers, and represents each layer's weights as linear combinations of shared matrix atoms. Experiments across scales (100M-700M parameters) show that MASA achieves better benchmark accuracy and perplexity than grouped-query attention (GQA), low-rank baselines, and the recently proposed Repeat-all-over/Sequential sharing at comparable parameter budgets. Ablation studies confirm robustness to the dictionary size and the efficacy of shared representations in capturing cross-layer statistical regularities. Extending to Vision Transformers (ViT), MASA matches performance metrics on image classification tasks with 66.7% fewer attention parameters. By combining dictionary learning strategies with transformer efficiency, MASA offers a scalable blueprint for parameter-efficient models without sacrificing performance. Finally, we investigate the possibility of employing MASA on large pretrained models to reduce their number of parameters without experiencing any significant drop in their performance.
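To make the sharing scheme concrete, here is a minimal NumPy sketch of the core idea: a bank of matrix atoms is shared across layers, and each layer's projection matrix is recovered as a linear combination of those atoms. All sizes and variable names below are illustrative assumptions, not the paper's actual configuration or implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration (not the paper's configuration):
# model dim d, number of layers L, dictionary size M.
d, L, M = 64, 12, 4

# One shared bank of d x d matrix atoms; in MASA there would be such a
# bank per projection type (Q, K, V, O). We sketch a single bank.
atoms = rng.standard_normal((M, d, d)) / np.sqrt(d)

# Per-layer mixing coefficients are the only layer-specific parameters.
coeffs = rng.standard_normal((L, M)) / np.sqrt(M)

def layer_weight(layer: int) -> np.ndarray:
    """W_l = sum_m coeffs[layer, m] * atoms[m]: a linear combination
    of the shared atoms yields this layer's projection matrix."""
    return np.einsum('m,mij->ij', coeffs[layer], atoms)

# Parameter count: shared atoms plus coefficients vs. one dense matrix
# per layer, illustrating where the reduction comes from.
shared_params = atoms.size + coeffs.size   # M*d^2 + L*M = 16432
dense_params = L * d * d                   # L*d^2     = 49152
print(shared_params, dense_params)
```

With M atoms covering L layers (M << L), the atom bank dominates the cost and the per-layer coefficients are negligible, which is how a fixed dictionary size can cut the attention parameter budget by roughly a factor of L/M regardless of depth.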

Published

2026-03-14

How to Cite

Zhussip, M., Shopkhoev, D., Ali, A., & Lefkimmiatis, S. (2026). Share Your Attention: Transformer Weight Sharing via Matrix-based Dictionary Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 40(34), 29260–29268. https://doi.org/10.1609/aaai.v40i34.40165

Section

AAAI Technical Track on Machine Learning XI