Dong, Yanhao, et al. “Accelerating LLM Inference Throughput via Asynchronous KV Cache Prefetching.” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 40, no. 25, Mar. 2026, pp. 20844-51, doi:10.1609/aaai.v40i25.39224.