SKDBERT: Compressing BERT via Stochastic Knowledge Distillation

Authors

  • Zixiang Ding, Meituan
  • Guoqing Jiang, Meituan
  • Shuai Zhang, Meituan
  • Lin Guo, Meituan
  • Wei Lin, Individual

DOI:

https://doi.org/10.1609/aaai.v37i6.25902

Keywords:

ML: Learning on the Edge & Model Compression, CV: Learning & Optimization for CV, CV: Representation Learning for Vision, SNLP: Language Models, SNLP: Learning & Optimization for SNLP, SNLP: Text Classification

Abstract

In this paper, we propose Stochastic Knowledge Distillation (SKD) to obtain a compact BERT-style language model dubbed SKDBERT. In each distillation iteration, SKD samples a teacher model from a pre-defined teacher team, which consists of multiple teacher models with multi-level capacities, to transfer knowledge into the student model in a one-to-one manner. The sampling distribution plays an important role in SKD. We heuristically present three types of sampling distributions to assign appropriate probabilities to multi-level teacher models. SKD has two advantages: 1) it preserves the diversity of multi-level teacher models by stochastically sampling a single teacher model in each distillation iteration, and 2) it improves the efficacy of knowledge distillation via multi-level teacher models when a large capacity gap exists between the teacher model and the student model. Experimental results on the GLUE benchmark show that SKDBERT reduces the size of a BERT model by 40% while retaining 99.5% of its language understanding performance and being 100% faster.
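To make the sampling procedure described above concrete, the following is a minimal sketch of a stochastic teacher-sampling distillation loop: one teacher is drawn from a multi-level teacher team at each iteration and distills into the student one-to-one. The toy models, the uniform sampling distribution, the temperature, and all names here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of stochastic knowledge distillation (SKD):
# sample one teacher per iteration, then distill into the student.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy "teacher team" with multi-level capacities (different hidden widths).
def make_model(hidden):
    return nn.Sequential(nn.Linear(16, hidden), nn.ReLU(), nn.Linear(hidden, 4))

teacher_team = [make_model(h) for h in (32, 64, 128)]  # multi-level teachers
student = make_model(8)                                # compact student

# Sampling distribution over teachers (uniform here as a placeholder;
# the paper proposes three heuristic distributions).
probs = [1 / 3, 1 / 3, 1 / 3]
temperature = 2.0
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(100):                      # distillation iterations
    x = torch.randn(32, 16)                  # stand-in for a training batch
    teacher = random.choices(teacher_team, weights=probs, k=1)[0]  # SKD sampling
    with torch.no_grad():
        t_logits = teacher(x)
    s_logits = student(x)
    # One-to-one soft-label distillation loss (KL between softened outputs).
    loss = F.kl_div(
        F.log_softmax(s_logits / temperature, dim=-1),
        F.softmax(t_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```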

Published

2023-06-26

How to Cite

Ding, Z., Jiang, G., Zhang, S., Guo, L., & Lin, W. (2023). SKDBERT: Compressing BERT via Stochastic Knowledge Distillation. Proceedings of the AAAI Conference on Artificial Intelligence, 37(6), 7414-7422. https://doi.org/10.1609/aaai.v37i6.25902

Section

AAAI Technical Track on Machine Learning I