Steering Pretrained Drafters During Speculative Decoding

Authors

  • Frédéric Berdoz ETH Zurich
  • Peer Rheinboldt ETH Zurich
  • Roger Wattenhofer ETH Zurich

DOI:

https://doi.org/10.1609/aaai.v40i36.40255

Abstract

Speculative decoding accelerates language model inference by separating generation into fast drafting and parallel verification. Its main limitation is drafter–verifier misalignment, which lowers token acceptance and reduces overall effectiveness. While small drafting heads trained from scratch compensate with speed, they struggle when verification dominates latency or when inputs are out of distribution. In contrast, pretrained drafters, though slower, achieve higher acceptance rates thanks to stronger standalone generation capabilities, making them competitive when drafting latency is negligible relative to verification or communication overhead. In this work, we aim to improve the acceptance rates of pretrained drafters by introducing a lightweight dynamic alignment mechanism: a steering vector computed from the verifier’s hidden states and injected into the pretrained drafter. Compared to existing offline alignment methods such as distillation, our approach boosts the number of accepted tokens by up to 35% under standard sampling and 22% under greedy sampling, while incurring negligible computational overhead. Importantly, our approach can be retrofitted to existing architectures and pretrained models, enabling rapid adoption.
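The steering mechanism described in the abstract could be sketched as follows. This is a minimal illustration, not the authors' implementation: the projection matrix `W_steer`, the hidden sizes, and the additive injection point are all hypothetical, assuming only the abstract's description (a steering vector computed from the verifier's hidden states and injected into the drafter).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hidden sizes for verifier (large model) and drafter (small model).
D_VERIFIER, D_DRAFTER = 16, 8

# Hypothetical learned projection mapping a verifier hidden state
# to a steering vector in the drafter's hidden space.
W_steer = rng.normal(scale=0.1, size=(D_DRAFTER, D_VERIFIER))

def steering_vector(verifier_hidden: np.ndarray) -> np.ndarray:
    """Compute a steering vector from the verifier's hidden state."""
    return W_steer @ verifier_hidden

def steered_drafter_state(drafter_hidden: np.ndarray,
                          steer: np.ndarray) -> np.ndarray:
    """Inject the steering vector into the drafter's hidden state.

    Additive injection is one plausible choice; the paper's actual
    injection mechanism may differ.
    """
    return drafter_hidden + steer

# Dummy hidden states standing in for real model activations.
verifier_hidden = rng.normal(size=D_VERIFIER)
drafter_hidden = rng.normal(size=D_DRAFTER)

steer = steering_vector(verifier_hidden)
steered = steered_drafter_state(drafter_hidden, steer)
print(steered.shape)  # (8,)
```

The appeal of such a scheme, per the abstract, is that the projection is lightweight (one matrix–vector product per draft step) and can be bolted onto an existing pretrained drafter without retraining it from scratch.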

Published

2026-03-14

How to Cite

Berdoz, F., Rheinboldt, P., & Wattenhofer, R. (2026). Steering Pretrained Drafters During Speculative Decoding. Proceedings of the AAAI Conference on Artificial Intelligence, 40(36), 30067-30075. https://doi.org/10.1609/aaai.v40i36.40255

Section

AAAI Technical Track on Natural Language Processing I