Tian, Yuxuan, Zihan Wang, Yebo Peng, Aomufei Yuan, Zhiming Wang, Bairen Yi, Xin Liu, Yong Cui, and Tong Yang. "KeepKV: Achieving Periodic Lossless KV Cache Compression for Efficient LLM Inference." Proceedings of the AAAI Conference on Artificial Intelligence 40, no. 39 (March 14, 2026): 33259–33267. Accessed May 14, 2026. https://ojs.aaai.org/index.php/AAAI/article/view/40611.