Distillation Dynamics: Towards Understanding Feature-Based Distillation in Vision Transformers
DOI:
https://doi.org/10.1609/aaai.v40i11.37913
Abstract
While feature-based knowledge distillation has proven highly effective for compressing CNNs, these techniques unexpectedly fail when applied to Vision Transformers (ViTs), often performing worse than simple logit-based distillation. We provide the first comprehensive analysis of this phenomenon through a novel analytical framework, termed "distillation dynamics", which combines frequency spectrum analysis, information entropy metrics, and activation magnitude tracking. Our investigation reveals that ViTs exhibit a distinctive U-shaped information processing pattern: initial compression followed by expansion. We identify the root cause of negative transfer in feature distillation: a fundamental representational paradigm mismatch between teacher and student models. Through frequency-domain analysis, we show that teacher models employ distributed, high-dimensional encoding strategies in later layers that smaller student models cannot replicate due to limited channel capacity. This mismatch causes late-layer feature alignment to actively harm student performance. Our findings reveal that successful knowledge transfer in ViTs requires moving beyond naive feature mimicry to methods that respect these fundamental representational constraints, providing essential theoretical guidance for designing effective ViT compression strategies.
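The abstract names three kinds of per-layer measurements (frequency spectrum, information entropy, activation magnitude). The sketch below is a minimal, hypothetical illustration of how such diagnostics could be computed from the patch-token features of one ViT block; the function name, histogram binning, and token-axis FFT are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def layer_diagnostics(features, n_bins=64):
    """Illustrative per-layer diagnostics for ViT features.

    features: array of shape (num_tokens, channels) -- patch-token
    activations from one transformer block (CLS token excluded).
    Returns a normalized token-frequency energy profile, a histogram
    estimate of activation entropy, and the mean activation magnitude.
    """
    # Frequency spectrum over the token dimension: how much energy the
    # layer places in low vs. high token-frequency components.
    spectrum = np.abs(np.fft.rfft(features, axis=0)) ** 2   # (freqs, channels)
    freq_profile = spectrum.mean(axis=1)
    freq_profile /= freq_profile.sum() + 1e-12

    # Shannon entropy of the activation value distribution.
    hist, _ = np.histogram(features, bins=n_bins, density=True)
    p = hist / (hist.sum() + 1e-12)
    entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))

    # Activation magnitude tracking.
    magnitude = np.mean(np.abs(features))

    return {"freq_profile": freq_profile, "entropy": entropy, "magnitude": magnitude}

if __name__ == "__main__":
    # Toy example: random "features" standing in for a 12-layer,
    # 196-token, 384-dim ViT; real usage would hook block outputs.
    rng = np.random.default_rng(0)
    per_layer = [layer_diagnostics(rng.normal(size=(196, 384))) for _ in range(12)]
    # Plotting entropy against layer depth is one way to look for the
    # compression-then-expansion (U-shaped) pattern the abstract describes.
    print([round(d["entropy"], 3) for d in per_layer])
```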
Published
2026-03-14
How to Cite
Tian, H., Xu, B., & Li, S. (2026). Distillation Dynamics: Towards Understanding Feature-Based Distillation in Vision Transformers. Proceedings of the AAAI Conference on Artificial Intelligence, 40(11), 9520–9528. https://doi.org/10.1609/aaai.v40i11.37913
Section
AAAI Technical Track on Computer Vision VIII