[1]
Long, L. et al. 2026. SlimInfer: Accelerating Long-Context LLM Inference via Dynamic Token Pruning. Proceedings of the AAAI Conference on Artificial Intelligence. 40, 38 (Mar. 2026), 32284–32292. DOI: https://doi.org/10.1609/aaai.v40i38.40502.