LRC-BERT: Latent-representation Contrastive Knowledge Distillation for Natural Language Understanding

Authors

  • Hao Fu, University of Science and Technology of China
  • Shaojun Zhou, Alibaba Group
  • Qihong Yang, Alibaba Group
  • Junjie Tang, Alibaba Group
  • Guiquan Liu, University of Science and Technology of China
  • Kaikui Liu, Alibaba Group
  • Xiaolong Li, Alibaba Group

Keywords:

Language Models

Abstract

Pre-trained models such as BERT have achieved strong results on a wide range of natural language processing tasks. However, their large number of parameters demands significant memory and inference time, which makes them difficult to deploy on edge devices. In this work, we propose LRC-BERT, a knowledge distillation method based on contrastive learning that fits the output of the intermediate layers from the angular-distance perspective, an aspect not considered by existing distillation methods. Furthermore, we introduce a gradient perturbation-based training architecture in the training phase to increase the robustness of LRC-BERT, which is the first such attempt in knowledge distillation. Additionally, to better capture the distribution characteristics of the intermediate layers, we design a two-stage training method for the total distillation loss. Finally, on 8 datasets of the General Language Understanding Evaluation (GLUE) benchmark, LRC-BERT outperforms existing state-of-the-art methods, demonstrating the effectiveness of our approach.
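The angular-distance fitting described in the abstract can be illustrated as a contrastive (InfoNCE-style) loss over cosine similarities between student and teacher intermediate representations: each student representation is pulled toward the matching teacher representation and pushed away from the other samples in the batch. The following is a minimal sketch under these assumptions, not the paper's exact loss; the function names, temperature value, and in-batch negative construction are illustrative.

```python
import numpy as np

def cosine_sim(a, b):
    # Row-wise cosine similarity matrix between two sets of vectors.
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def angular_contrastive_loss(student_h, teacher_h, temperature=0.1):
    """InfoNCE-style loss on cosine (angular) similarity.

    student_h, teacher_h: arrays of shape (batch, hidden);
    the matching teacher row is the positive, all other rows
    in the batch serve as negatives (illustrative assumption).
    """
    sims = cosine_sim(student_h, teacher_h) / temperature  # (B, B)
    # Numerically stable log-softmax over each row;
    # positives sit on the diagonal.
    logits = sims - sims.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

When the student's intermediate representations align in angle with the teacher's, the diagonal similarities dominate and the loss approaches zero; mismatched representations yield a loss near log(batch size).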

Published

2021-05-18

How to Cite

Fu, H., Zhou, S., Yang, Q., Tang, J., Liu, G., Liu, K., & Li, X. (2021). LRC-BERT: Latent-representation Contrastive Knowledge Distillation for Natural Language Understanding. Proceedings of the AAAI Conference on Artificial Intelligence, 35(14), 12830-12838. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17518

Section

AAAI Technical Track on Speech and Natural Language Processing I