[1]
Tian, Y. et al. 2026. KeepKV: Achieving Periodic Lossless KV Cache Compression for Efficient LLM Inference. Proceedings of the AAAI Conference on Artificial Intelligence 40, 39 (Mar. 2026), 33259–33267. DOI: https://doi.org/10.1609/aaai.v40i39.40611.