Re-SpS: A Reinforcement Learning Approach to Speculative Sampling
DOI:
https://doi.org/10.1609/aaai.v40i39.40625
Abstract
Inference-time latency remains an open challenge for real-world applications of large language models (LLMs). State-of-the-art (SOTA) speculative sampling (SpS) methods for LLMs, such as EAGLE-3, use tree-based drafting to explore multiple candidate continuations in parallel. However, the hyperparameters controlling the tree structure are static, which limits flexibility and efficiency across diverse contexts and domains. We introduce Reinforcement learning for Speculative Sampling (Re-SpS), the first reinforcement learning (RL)-based framework for draft tree hyperparameter optimization. Re-SpS dynamically adjusts draft tree hyperparameters in real time, learning context-aware policies that maximize generation speed by balancing speculative aggression against computational overhead. It leverages efficient state representations derived from target model hidden states and introduces multi-step action persistence for better context modeling. Evaluation across five diverse benchmarks demonstrates consistent improvements over the SOTA method EAGLE-3, achieving up to a 5.45x speedup over the backbone LLM and up to a 1.12x speedup over EAGLE-3, with no loss in output fidelity.
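The abstract only sketches the mechanism, so the following is a minimal, hypothetical illustration (not the authors' implementation) of the general idea: an RL policy maps a pooled target-model hidden state to a choice of draft-tree hyperparameters and is updated with a simple policy-gradient objective. The class names, the action grid, the pooling, and the reward placeholder are all assumptions for illustration.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

# Hypothetical hyperparameter grid: each action selects a (tree depth, branching factor)
# pair for the next drafting steps. The values here are illustrative only.
ACTIONS = [(d, k) for d in (4, 6, 8) for k in (2, 4, 8)]

class DraftTreePolicy(nn.Module):
    """Maps a pooled target-model hidden state to a distribution over tree configs."""
    def __init__(self, hidden_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_dim, 256),
            nn.ReLU(),
            nn.Linear(256, n_actions),
        )

    def forward(self, state: torch.Tensor) -> Categorical:
        return Categorical(logits=self.net(state))

def select_config(policy: DraftTreePolicy, hidden_state: torch.Tensor):
    """Sample a draft-tree configuration; keep the log-prob for the policy-gradient update."""
    dist = policy(hidden_state)
    action = dist.sample()
    return ACTIONS[action.item()], dist.log_prob(action)

# --- toy usage with random stand-ins for real hidden states and rewards ---
if __name__ == "__main__":
    hidden_dim = 4096                      # e.g. the target model's hidden size
    policy = DraftTreePolicy(hidden_dim, len(ACTIONS))
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

    state = torch.randn(hidden_dim)        # placeholder for a pooled hidden state
    (depth, branch), log_prob = select_config(policy, state)

    # The reward would be a measured throughput signal (e.g. accepted tokens per unit
    # of draft cost); a random scalar stands in for it here.
    reward = torch.rand(())
    loss = -(reward * log_prob)            # REINFORCE-style objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"chose depth={depth}, branching={branch}")
```

In the paper's setting, the chosen configuration would be held fixed for several drafting steps (multi-step action persistence) before the policy is queried again, so the per-decision overhead stays small relative to the speedup from better tree shapes.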
Published
2026-03-14
How to Cite
Wang, C., Shi, D. H., & Chen, H. (2026). Re-SpS: A Reinforcement Learning Approach to Speculative Sampling. Proceedings of the AAAI Conference on Artificial Intelligence, 40(39), 33386–33394. https://doi.org/10.1609/aaai.v40i39.40625
Section
AAAI Technical Track on Natural Language Processing IV