[1]
Dong, Y. et al. 2026. Accelerating LLM Inference Throughput via Asynchronous KV Cache Prefetching. Proceedings of the AAAI Conference on Artificial Intelligence. 40, 25 (Mar. 2026), 20844–20851. DOI:https://doi.org/10.1609/aaai.v40i25.39224.