Long, Lingkun, et al. “SlimInfer: Accelerating Long-Context LLM Inference via Dynamic Token Pruning.” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 40, no. 38, Mar. 2026, pp. 32284-92, doi:10.1609/aaai.v40i38.40502.