(1) Long, L.; Yang, R.; Huang, Y.; Hui, D.; Zhou, A.; Yang, J. SlimInfer: Accelerating Long-Context LLM Inference via Dynamic Token Pruning. AAAI 2026, 40, 32284-32292.