TransMamba: A Sequence-Level Hybrid Transformer-Mamba Language Model
DOI:
https://doi.org/10.1609/aaai.v40i38.40451

Abstract
Transformers are the cornerstone of modern large language models, but their quadratic computational complexity limits efficiency in long-sequence processing. Recent advancements in Mamba, a state space model (SSM) with linear complexity, offer promising efficiency gains but suffer from unstable in-context learning and weak multitask generalization. Some works adopt layer-level hybrid structures that interleave Transformer and Mamba layers, aiming to exploit the advantages of both. This paper proposes TransMamba, a novel sequence-level hybrid framework that unifies Transformer and Mamba through shared parameter matrices (QKV and CBx) and can thus dynamically switch between attention and SSM mechanisms at different token lengths and layers. We design a Memory Converter to bridge Transformer and Mamba by converting attention outputs into SSM-compatible states, ensuring seamless information flow at the TransPoints where the transformation happens. TransPoint scheduling is also thoroughly explored to balance effectiveness and efficiency. Extensive experiments demonstrate that TransMamba achieves superior training efficiency and performance compared to single-architecture and hybrid baselines, and validate a deeper consistency between the Transformer and Mamba paradigms at the sequence level, offering a scalable solution for next-generation language modeling.
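To make the sequence-level switching idea concrete, the toy NumPy sketch below processes a prefix of tokens with causal softmax attention and the remaining tokens with a Mamba-style linear recurrence, seeding the recurrent state from the prefix's keys and values in the spirit of the Memory Converter. The shared projection, the scalar decay `A`, and the outer-product state conversion are illustrative assumptions for this sketch, not the paper's actual parameterization.

```python
import numpy as np

def toy_transmamba(x, W_qkv, A, transpoint):
    """Toy sketch of sequence-level hybrid processing (assumed forms throughout).

    x:          (T, d) token representations
    W_qkv:      (d, 3d) shared projection reused by both phases
    A:          scalar decay for the SSM phase (hypothetical parameterization)
    transpoint: index where processing switches from attention to the SSM
    """
    d = x.shape[1]
    # Shared parameters: one projection yields q/k/v for attention,
    # which the SSM phase reuses as C/B/x-like inputs.
    q, k, v = (x @ W_qkv).reshape(x.shape[0], 3, d).transpose(1, 0, 2)

    # --- Transformer phase on the prefix: causal softmax attention ---
    pre_q, pre_k, pre_v = q[:transpoint], k[:transpoint], v[:transpoint]
    scores = pre_q @ pre_k.T / np.sqrt(d)
    scores[np.triu(np.ones_like(scores), k=1).astype(bool)] = -np.inf
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)
    y_prefix = attn @ pre_v

    # --- "Memory Converter" (assumed form): summarize the prefix's
    # keys/values as a (d, d) recurrent state so no context is lost.
    h = pre_k.T @ pre_v

    # --- SSM phase on the suffix: decayed linear recurrence, reusing
    # k[t] as the input projection and q[t] as the readout.
    ys = []
    for t in range(transpoint, x.shape[0]):
        h = A * h + np.outer(k[t], v[t])  # state update with decay
        ys.append(q[t] @ h)               # linear-time readout
    y_suffix = np.stack(ys) if ys else np.zeros((0, d))

    return np.concatenate([y_prefix, y_suffix])
```

The suffix loop touches each token once against a fixed-size state, which is where the linear-complexity benefit of the SSM phase comes from; the prefix retains full quadratic attention.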
Published
2026-03-14
How to Cite
Li, Y., Xie, R., Yang, Z., Sun, X., Li, S., Han, W., … Cheng, Y. (2026). TransMamba: A Sequence-Level Hybrid Transformer-Mamba Language Model. Proceedings of the AAAI Conference on Artificial Intelligence, 40(38), 31823–31833. https://doi.org/10.1609/aaai.v40i38.40451
Section
AAAI Technical Track on Natural Language Processing III