Self-Indexing KVCache: Predicting Sparse Attention from Compressed Keys

Authors

  • Xu Yang, Hunan University
  • Jiapeng Zhang, Hunan University
  • Dongyang Zhao, Hunan University
  • Guo Chen, Hunan University
  • Zhuo Tang, Hunan University; Shenzhen Research Institute, Hunan University

DOI:

https://doi.org/10.1609/aaai.v40i33.39988

Abstract

The key-value (KV) cache in self-attention has emerged as a major bottleneck in long-context and large-batch inference for LLMs. Existing approaches often treat sparsity prediction and compression as separate modules: auxiliary index structures select the relevant tokens, while complex quantization schemes reduce memory usage. This fragmented design introduces redundant overhead and limits scalability. In this paper, we propose a novel paradigm: the compressed key representation serves not merely as storage, but as a self-indexing structure that directly enables efficient sparse attention. By designing a sign-based 1-bit vector quantization (VQ) scheme, our method unifies compression and retrieval in a single, hardware-friendly format. This approach eliminates the need for external indices or learning-based predictors, offering a lightweight yet robust solution for memory-constrained inference. All components are hardware-efficient and easy to implement; with custom CUDA kernels, our method integrates seamlessly with FlashAttention, adding minimal runtime and memory overhead. Experimental results demonstrate that our approach is both effective and efficient.
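The abstract leaves the mechanics implicit, so the following is a minimal NumPy sketch of the general idea it describes: quantize each cached key to its sign bits, then reuse those same bits as the retrieval index by scoring tokens with a Hamming-style proxy for the query-key dot product. The function names (`compress_keys`, `select_tokens`, `sparse_attention`), the XOR/popcount scoring, and the fixed token `budget` are illustrative assumptions, not the paper's actual CUDA kernels.

```python
import numpy as np

# Illustrative sketch of sign-based 1-bit key quantization reused as a
# self-index for sparse attention. Names, scoring rule, and the token
# budget are assumptions for exposition, not the authors' implementation.

def compress_keys(K: np.ndarray) -> np.ndarray:
    """1-bit sign quantization: keep only sign(K), packed 8 bits per byte.

    K: (num_tokens, head_dim) float array.
    Returns a (num_tokens, head_dim // 8) uint8 array.
    """
    bits = (K > 0).astype(np.uint8)      # one sign bit per dimension
    return np.packbits(bits, axis=-1)    # compact, hardware-friendly layout

def select_tokens(q: np.ndarray, packed_keys: np.ndarray, budget: int) -> np.ndarray:
    """Use the compressed keys themselves as the index: rank cached tokens
    by sign agreement with the query (a Hamming-distance proxy for the
    dot product) and keep the `budget` best-matching token indices."""
    q_bits = np.packbits((q > 0).astype(np.uint8))
    # popcount of XOR = number of sign mismatches between query and key
    mismatches = np.unpackbits(packed_keys ^ q_bits, axis=-1).sum(axis=-1)
    return np.argsort(mismatches)[:budget]

def sparse_attention(q: np.ndarray, K: np.ndarray, V: np.ndarray,
                     budget: int = 64) -> np.ndarray:
    """Full-precision attention restricted to the tokens the 1-bit
    self-index predicts as relevant."""
    idx = select_tokens(q, compress_keys(K), budget)
    scores = K[idx] @ q / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V[idx]

# Toy usage: 1024 cached tokens, head dimension 128.
rng = np.random.default_rng(0)
K = rng.standard_normal((1024, 128)).astype(np.float32)
V = rng.standard_normal((1024, 128)).astype(np.float32)
q = rng.standard_normal(128).astype(np.float32)
out = sparse_attention(q, K, V, budget=64)
print(out.shape)  # (128,)
```

In a real system the packed keys would live in the KV cache alongside (or in place of) the full-precision keys, and the XOR/popcount step would run inside a fused kernel before FlashAttention; the sketch only illustrates why one sign bit per dimension suffices to rank candidate tokens without any external index.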

Published

2026-03-14

How to Cite

Yang, X., Zhang, J., Zhao, D., Chen, G., & Tang, Z. (2026). Self-Indexing KVCache: Predicting Sparse Attention from Compressed Keys. Proceedings of the AAAI Conference on Artificial Intelligence, 40(33), 27675–27683. https://doi.org/10.1609/aaai.v40i33.39988

Section

AAAI Technical Track on Machine Learning X