LatentLLM: Activation-Aware Transform to Multi-Head Latent Attention

Authors

  • Toshiaki Koike-Akino, Mitsubishi Electric Research Labs
  • Xiangyu Chen, Sony Electronics Inc.
  • Jing Liu, Mitsubishi Electric Research Labs
  • Ye Wang, Mitsubishi Electric Research Labs
  • Pu (Perry) Wang, Mitsubishi Electric Research Labs
  • Matthew Brand, Mitsubishi Electric Research Labs

DOI:

https://doi.org/10.1609/aaai.v40i27.39425

Abstract

Modern foundation models such as large language models (LLMs) require massive computational and memory resources. We propose a new framework that converts such LLMs into a reduced-dimension latent structure. Our method extends a local activation-aware tensor decomposition to a global attention-aware joint tensor decomposition. When reducing the latent dimension to realize computationally and memory-efficient LLMs, our framework significantly improves model accuracy over existing model compression methods. We demonstrate the benefit on several benchmarks, including multi-modal reasoning tasks.
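The "activation-aware" decomposition mentioned above refers to factorizing a layer's weight so that the approximation error is measured on calibration activations rather than on the raw weights. Below is a minimal sketch of that local building block; the function name, regularization constant, and the Cholesky-whitening formulation are illustrative assumptions in the spirit of activation-aware SVD methods, and the paper's global attention-aware joint decomposition is not reproduced here.

```python
import torch

def activation_aware_lowrank(W, X, rank):
    """Hypothetical sketch: low-rank factorization of a weight W (out x in),
    weighted by calibration activations X (n_samples x in), i.e. minimizing
    ||(W - A @ B) @ X.T||_F instead of the plain ||W - A @ B||_F."""
    # Activation second-moment matrix (in x in), regularized for stability.
    S = X.T @ X / X.shape[0]
    S += 1e-6 * torch.eye(S.shape[0], dtype=S.dtype, device=S.device)
    # Cholesky factor L with S = L @ L.T acts as an activation whitener.
    L = torch.linalg.cholesky(S)
    # SVD of the activation-whitened weight, truncated to the target rank.
    U, sigma, Vh = torch.linalg.svd(W @ L, full_matrices=False)
    A = U[:, :rank]
    # Map the truncated factor back through L^{-1}:
    # B = diag(sigma_r) @ Vh_r @ L^{-1}, via a triangular solve.
    M = torch.diag(sigma[:rank]) @ Vh[:rank]
    B = torch.linalg.solve_triangular(L.T, M.T, upper=True).T
    return A, B  # W is approximated by A @ B

# Usage with random stand-ins for a weight and calibration activations:
W = torch.randn(512, 768)
X = torch.randn(1024, 768)
A, B = activation_aware_lowrank(W, X, rank=128)
```

The triangular solve avoids forming L^{-1} explicitly, so the factors A and B can replace the original linear layer with two smaller ones whose inner (latent) dimension is the chosen rank.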

Published

2026-03-14

How to Cite

Koike-Akino, T., Chen, X., Liu, J., Wang, Y., Wang, P. (Perry), & Brand, M. (2026). LatentLLM: Activation-Aware Transform to Multi-Head Latent Attention. Proceedings of the AAAI Conference on Artificial Intelligence, 40(27), 22644–22652. https://doi.org/10.1609/aaai.v40i27.39425

Section

AAAI Technical Track on Machine Learning IV