KeepKV: Achieving Periodic Lossless KV Cache Compression for Efficient LLM Inference

Authors

  • Yuxuan Tian, Peking University
  • Zihan Wang, Peking University
  • Yebo Peng, Peking University
  • Aomufei Yuan, Peking University
  • Zhiming Wang, Peking University
  • Bairen Yi, ByteDance Inc.
  • Xin Liu, ByteDance Inc.
  • Yong Cui, Tsinghua University
  • Tong Yang, Peking University

DOI

https://doi.org/10.1609/aaai.v40i39.40611

Abstract

Efficient inference of large language models (LLMs) is hindered by an ever-growing key-value (KV) cache, making KV cache compression a critical research direction. Traditional methods selectively evict less important KV cache entries, which leads to information loss and hallucinations. Recently, merging-based strategies have been explored to retain more information by merging the KV pairs that would otherwise be discarded; however, existing approaches inevitably introduce inconsistencies between the attention distributions before and after merging, degrading generation quality. To overcome this challenge, we propose KeepKV, a novel adaptive KV cache merging method designed to preserve performance under strict memory constraints, achieving single-step lossless compression and providing error bounds for multi-step compression. KeepKV introduces the Electoral Votes mechanism, which records merging history and adaptively adjusts attention scores. It further leverages a novel Zero Inference-Perturbation Merging method to compensate for the attention loss resulting from cache merging. Extensive experiments on various benchmarks and LLM architectures demonstrate that KeepKV substantially reduces memory usage while retaining essential context information, achieving a more than 2x improvement in inference throughput and maintaining superior generation quality even with a KV cache budget of only 10%.
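To make the merging idea in the abstract concrete, below is a minimal, self-contained Python sketch of vote-weighted KV cache merging. It is an illustration under stated assumptions, not the paper's KeepKV algorithm: the names (`merge_into`, `attention`, `votes`) and the simple vote-count rescaling are invented here as a stand-in for the paper's Electoral Votes mechanism; the actual Zero Inference-Perturbation Merging compensation is not reproduced.

    # Hypothetical sketch of vote-weighted KV cache merging (NOT the
    # authors' implementation). Each cache entry carries a "vote" count
    # recording how many original entries it represents; attention scores
    # are rescaled by the votes so a merged entry approximates the
    # attention mass of the entries it absorbed.
    import numpy as np

    def merge_into(keys, values, votes, dst, src):
        """Merge cache entry `src` into `dst` with a vote-weighted average,
        then drop `src` from the cache."""
        total = votes[dst] + votes[src]
        keys[dst] = (votes[dst] * keys[dst] + votes[src] * keys[src]) / total
        values[dst] = (votes[dst] * values[dst] + votes[src] * values[src]) / total
        votes[dst] = total
        keep = np.arange(len(votes)) != src
        return keys[keep], values[keep], votes[keep]

    def attention(q, keys, values, votes):
        """Single-query attention where each entry's exponentiated score is
        multiplied by its vote count before normalization."""
        scores = keys @ q / np.sqrt(q.shape[-1])
        w = votes * np.exp(scores - scores.max())
        w /= w.sum()
        return w @ values

    rng = np.random.default_rng(0)
    d, n = 16, 8
    q = rng.standard_normal(d)
    K = rng.standard_normal((n, d))
    V = rng.standard_normal((n, d))
    votes = np.ones(n)
    # Compress two cache entries into one, then attend over the smaller cache.
    K, V, votes = merge_into(K, V, votes, dst=2, src=5)
    print(attention(q, K, V, votes))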

Published

2026-03-14

How to Cite

Tian, Y., Wang, Z., Peng, Y., Yuan, A., Wang, Z., Yi, B., … Yang, T. (2026). KeepKV: Achieving Periodic Lossless KV Cache Compression for Efficient LLM Inference. Proceedings of the AAAI Conference on Artificial Intelligence, 40(39), 33259–33267. https://doi.org/10.1609/aaai.v40i39.40611

Issue

Vol. 40 No. 39 (2026)

Section

AAAI Technical Track on Natural Language Processing IV