Adversarial Data Augmentation for Task-Specific Knowledge Distillation of Pre-trained Transformers

Authors

  • Minjia Zhang, Microsoft
  • Niranjan Uma Naresh, Microsoft
  • Yuxiong He, Microsoft

DOI:

https://doi.org/10.1609/aaai.v36i10.21423

Keywords:

Speech & Natural Language Processing (SNLP), Machine Learning (ML)

Abstract

Deep and large pre-trained language models (e.g., BERT, GPT-3) are state-of-the-art for various natural language processing tasks. However, the sheer size of these models makes fine-tuning and online deployment challenging due to latency and cost constraints. Existing knowledge distillation methods reduce the model size, but they may struggle to transfer knowledge from the teacher model to the student model because of the limited data available from downstream tasks. In this work, we propose AD^2, a novel and effective data augmentation approach for improving task-specific knowledge transfer when compressing large pre-trained transformer models. Unlike prior methods, AD^2 performs distillation on an enhanced training set that contains both original inputs and adversarially perturbed samples that mimic the output distribution of the teacher. Experimental results show that this method allows better transfer of knowledge from the teacher to the student during distillation, producing student models that retain 99.6% of the teacher model's accuracy while outperforming existing task-specific knowledge distillation baselines by 1.2 points on average across a variety of natural language understanding tasks. Moreover, compared with alternative data augmentation methods, such as text-editing-based approaches, AD^2 is up to 28 times faster while achieving comparable or higher accuracy. In addition, when AD^2 is combined with more advanced task-agnostic distillation, it advances the state-of-the-art performance even further. Beyond the encouraging performance, this paper also provides thorough ablation studies and analysis. The discovered interplay between KD and adversarial data augmentation for compressing pre-trained Transformers may further inspire more advanced KD algorithms for compressing even larger-scale models.
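To make the idea in the abstract concrete, below is a minimal sketch of distillation with adversarial data augmentation in the embedding space. It is not the paper's exact AD^2 objective or training procedure; it assumes PyTorch with HuggingFace-style sequence-classification models (outputs with `.logits`, `get_input_embeddings()`, `inputs_embeds`), a single perturbation step, and illustrative hyperparameters (`eps`, `alpha`, `temperature`). The helper names `kd_loss` and `distill_step` are hypothetical.

```python
# Sketch: task-specific KD on both original and adversarially perturbed inputs.
# Assumptions (not from the paper): single-step perturbation in embedding space
# that ascends the student-teacher KL, HuggingFace-style model interfaces.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-label distillation loss: KL between teacher and student distributions."""
    t = temperature
    return F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)

def distill_step(student, teacher, batch, optimizer, eps=1e-3, alpha=1.0):
    student.train()
    teacher.eval()

    mask = batch["attention_mask"]
    # Word embeddings of the original inputs (so we can perturb them directly).
    embeds = student.get_input_embeddings()(batch["input_ids"])

    with torch.no_grad():
        teacher_logits = teacher(
            input_ids=batch["input_ids"], attention_mask=mask
        ).logits

    # 1) Distillation loss on the original inputs.
    clean_logits = student(inputs_embeds=embeds, attention_mask=mask).logits
    loss_clean = kd_loss(clean_logits, teacher_logits)

    # 2) Adversarial augmentation: find a small embedding perturbation that
    #    increases the student-teacher divergence, then normalize it per token.
    base = embeds.detach()
    delta = torch.zeros_like(base, requires_grad=True)
    adv_logits = student(inputs_embeds=base + delta, attention_mask=mask).logits
    grad = torch.autograd.grad(kd_loss(adv_logits, teacher_logits), delta)[0]
    delta = eps * grad / (grad.norm(dim=-1, keepdim=True) + 1e-12)

    # 3) Distill on the perturbed samples as well, against the same teacher outputs.
    pert_logits = student(inputs_embeds=embeds + delta, attention_mask=mask).logits
    loss_adv = kd_loss(pert_logits, teacher_logits)

    loss = loss_clean + alpha * loss_adv
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the augmented examples live in the continuous embedding space rather than being produced by text edits, which is why such augmentation can avoid the cost of generating and re-tokenizing edited sentences; the exact perturbation rule, number of steps, and loss weighting used by AD^2 are described in the paper itself.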

Published

2022-06-28

How to Cite

Zhang, M., Naresh, N. U., & He, Y. (2022). Adversarial Data Augmentation for Task-Specific Knowledge Distillation of Pre-trained Transformers. Proceedings of the AAAI Conference on Artificial Intelligence, 36(10), 11685-11693. https://doi.org/10.1609/aaai.v36i10.21423

Issue

Vol. 36 No. 10 (2022)

Section

AAAI Technical Track on Speech and Natural Language Processing