1. Long L, Yang R, Huang Y, Hui D, Zhou A, Yang J. SlimInfer: accelerating long-context LLM inference via dynamic token pruning. AAAI [Internet]. 2026 Mar 14 [cited 2026 May 15];40(38):32284-92. Available from: https://ojs.aaai.org/index.php/AAAI/article/view/40502