[1] L. Long, R. Yang, Y. Huang, D. Hui, A. Zhou, and J. Yang, "SlimInfer: Accelerating Long-Context LLM Inference via Dynamic Token Pruning," AAAI, vol. 40, no. 38, pp. 32284–32292, Mar. 2026.