Multi-Level Optimal Transport for Universal Cross-Tokenizer Knowledge Distillation on Language Models

Authors

  • Xiao Cui, University of Science and Technology of China
  • Mo Zhu, Zhejiang University
  • Yulei Qin, Tencent
  • Liang Xie, Zhejiang University; Zhejiang University of Technology
  • Wengang Zhou, University of Science and Technology of China
  • Houqiang Li, University of Science and Technology of China

DOI:

https://doi.org/10.1609/aaai.v39i22.34543

Abstract

Knowledge distillation (KD) has become a prevalent technique for compressing large language models (LLMs). Existing KD methods are constrained by the need for identical tokenizers (i.e., vocabularies) between teacher and student models, limiting their versatility in handling LLMs of different architecture families. In this paper, we introduce Multi-Level Optimal Transport (MultiLevelOT), a novel approach that advances optimal transport for universal cross-tokenizer knowledge distillation. Our method aligns the logit distributions of the teacher and the student at both the token and sequence levels using diverse cost matrices, eliminating the need for dimensional or token-by-token correspondence. At the token level, MultiLevelOT integrates both global and local information by jointly optimizing all tokens within a sequence to enhance robustness. At the sequence level, we efficiently capture the complex distribution structure of logits via the Sinkhorn distance, which approximates the Wasserstein distance as the divergence measure. Extensive experiments on tasks such as extractive QA, generative QA, and summarization demonstrate that MultiLevelOT outperforms state-of-the-art cross-tokenizer KD methods under various settings. Our approach is robust across student and teacher models of different families, architectures, and parameter sizes.
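The Sinkhorn distance mentioned in the abstract is an entropy-regularized approximation of the Wasserstein distance that can be computed with simple alternating scaling iterations. The sketch below is an illustrative NumPy implementation of the standard Sinkhorn algorithm between two discrete distributions, not the authors' code; the cost matrix, regularization strength `epsilon`, and iteration count are assumptions chosen for the toy example.

```python
import numpy as np

def sinkhorn_distance(p, q, C, epsilon=0.1, n_iters=500):
    """Entropy-regularized optimal transport (Sinkhorn) between discrete
    distributions p and q under cost matrix C; approximates the
    Wasserstein distance for small epsilon."""
    K = np.exp(-C / epsilon)              # Gibbs kernel
    u = np.ones_like(p)
    for _ in range(n_iters):              # alternating scaling updates
        v = q / (K.T @ u)
        u = p / (K @ v)
    T = u[:, None] * K * v[None, :]       # approximate transport plan
    return float(np.sum(T * C))

# Toy example: two distributions over 3 bins with |i - j| ground cost.
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.3, 0.5])
C = np.abs(np.subtract.outer(np.arange(3), np.arange(3))).astype(float)
d = sinkhorn_distance(p, q, C)
```

In the cross-tokenizer KD setting described above, `p` and `q` would play the role of teacher and student logit distributions, and the cost matrix removes the need for a dimension-by-dimension vocabulary correspondence.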

Published

2025-04-11

How to Cite

Cui, X., Zhu, M., Qin, Y., Xie, L., Zhou, W., & Li, H. (2025). Multi-Level Optimal Transport for Universal Cross-Tokenizer Knowledge Distillation on Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 39(22), 23724-23732. https://doi.org/10.1609/aaai.v39i22.34543

Section

AAAI Technical Track on Natural Language Processing I