DONG, Yanhao; MIAO, Yubo; LI, Weinan; ZHENG, Xiao; WANG, Chao; WU, Jiesheng; LYU, Feng. Accelerating LLM Inference Throughput via Asynchronous KV Cache Prefetching. Proceedings of the AAAI Conference on Artificial Intelligence, [S. l.], v. 40, n. 25, p. 20844–20851, 2026. DOI: 10.1609/aaai.v40i25.39224. Available at: https://ojs.aaai.org/index.php/AAAI/article/view/39224. Accessed: 13 May 2026.