CTPD: Cross Tokenizer Preference Distillation
DOI:
https://doi.org/10.1609/aaai.v40i44.41114
Abstract
While knowledge distillation has seen widespread use in pre-training and instruction tuning, its application to aligning language models with human preferences remains underexplored, particularly in the more realistic cross-tokenizer setting. The incompatibility of tokenization schemes between teacher and student models has largely prevented fine-grained, white-box distillation of preference information. To address this gap, we propose Cross-Tokenizer Preference Distillation (CTPD), the first unified framework for transferring human-aligned behavior between models with heterogeneous tokenizers. CTPD introduces three key innovations: (1) Aligned Span Projection, which maps teacher and student tokens to shared character-level spans for precise supervision transfer; (2) a cross-tokenizer adaptation of Token-level Importance Sampling (TIS-DPO) for improved credit assignment; and (3) a Teacher-Anchored Reference, allowing the student to directly leverage the teacher’s preferences in a DPO-style objective. Our theoretical analysis grounds CTPD in importance sampling, and experiments across multiple benchmarks confirm its effectiveness, with significant performance gains over existing methods. These results establish CTPD as a practical and general solution for preference distillation across diverse tokenization schemes, opening the door to more accessible and efficient alignment of language models.
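The core idea behind Aligned Span Projection, as described in the abstract, is that two tokenizers segment the same text differently, but both segmentations can be anchored to the same character offsets. The following is a minimal illustrative sketch of that alignment step (not the authors' implementation; the function names and the simple overlap rule are assumptions for illustration):

```python
def char_spans(tokens):
    """Return (start, end) character offsets for each token, in order,
    assuming the tokens concatenate back to the original text."""
    spans, pos = [], 0
    for tok in tokens:
        spans.append((pos, pos + len(tok)))
        pos += len(tok)
    return spans

def overlapping_pairs(teacher_tokens, student_tokens):
    """Pair teacher/student token indices whose character spans overlap,
    yielding a many-to-many alignment over the shared character axis."""
    t_spans = char_spans(teacher_tokens)
    s_spans = char_spans(student_tokens)
    pairs = []
    for i, (ts, te) in enumerate(t_spans):
        for j, (ss, se) in enumerate(s_spans):
            if max(ts, ss) < min(te, se):  # non-empty character overlap
                pairs.append((i, j))
    return pairs

# The same word tokenized two different ways:
teacher = ["pre", "fer", "ence"]   # hypothetical teacher tokenization
student = ["prefer", "ence"]       # hypothetical student tokenization
print(overlapping_pairs(teacher, student))  # [(0, 0), (1, 0), (2, 1)]
```

Once such index pairs are available, token-level supervision signals from the teacher can be projected onto the student's tokens through the shared spans; how CTPD weights and combines overlapping spans is detailed in the paper itself.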
Published
2026-03-14
How to Cite
Nguyen, T., Van Dat, P., Nguyen, N., Van, L. N., Le, T., & Nguyen, T. H. (2026). CTPD: Cross Tokenizer Preference Distillation. Proceedings of the AAAI Conference on Artificial Intelligence, 40(44), 37783–37790. https://doi.org/10.1609/aaai.v40i44.41114
Issue
Section
AAAI Special Track on AI Alignment