Learning to Augment for Data-scarce Domain BERT Knowledge Distillation


  • Lingyun Feng Tsinghua University
  • Minghui Qiu Alibaba Group
  • Yaliang Li Alibaba Group
  • Hai-Tao Zheng Tsinghua University
  • Ying Shen Sun Yat-Sen University




Transfer/Adaptation/Multi-task/Meta/Automated Learning


Although pre-trained language models such as BERT have achieved appealing performance on a wide range of Natural Language Processing (NLP) tasks, they are too computationally expensive to deploy in real-time applications. A typical remedy is knowledge distillation, which compresses these large pre-trained models (teacher models) into small student models. However, for a target domain with scarce training data, the teacher can hardly pass useful knowledge to the student, which degrades the student models' performance. To tackle this problem, we propose a method that learns to augment data for BERT knowledge distillation in target domains with scarce labeled data, by learning a cross-domain manipulation scheme that automatically augments the target domain with the help of resource-rich source domains. Specifically, the proposed method generates samples from a stationary distribution near the target data and adopts a reinforced controller to automatically refine the augmentation strategy according to the student's performance. Extensive experiments demonstrate that the proposed method significantly outperforms state-of-the-art baselines on different NLP tasks, and in data-scarce domains the compressed student models even outperform the original large teacher model with far fewer parameters (only ~13.3%) when only a few labeled examples are available.
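The abstract names two ingredients without spelling them out: a teacher-to-student distillation objective and a reinforced controller that adjusts the augmentation policy based on student performance. The sketch below illustrates both in minimal form; the function names, temperature, learning rate, and reward definition are our own illustrative assumptions, not details from the paper.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; a higher T yields a softer distribution.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T=2.0):
    # KL divergence between softened teacher and student distributions:
    # the standard objective for passing teacher knowledge to a student.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q)))) * T * T

def reinforce_update(action_logits, action, reward, lr=0.1):
    # One REINFORCE-style step for a controller over augmentation actions:
    # increase the probability of the chosen action in proportion to the
    # reward (e.g. the student's gain on a held-out target-domain set).
    probs = softmax(action_logits)
    grad = -probs
    grad[action] += 1.0  # gradient of log pi(action) w.r.t. the logits
    return np.asarray(action_logits, dtype=float) + lr * reward * grad
```

When teacher and student agree exactly, the distillation loss is zero; a positive reward shifts the controller's logits toward the rewarded augmentation action.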




How to Cite

Feng, L., Qiu, M., Li, Y., Zheng, H.-T., & Shen, Y. (2021). Learning to Augment for Data-scarce Domain BERT Knowledge Distillation. Proceedings of the AAAI Conference on Artificial Intelligence, 35(8), 7422-7430. https://doi.org/10.1609/aaai.v35i8.16910



AAAI Technical Track on Machine Learning I