Acquiring Knowledge from Pre-Trained Model to Neural Machine Translation


  • Rongxiang Weng Alibaba Group
  • Heng Yu Alibaba Group
  • Shujian Huang Nanjing University
  • Shanbo Cheng Alibaba Group
  • Weihua Luo Alibaba Group



Pre-training and fine-tuning have achieved great success in the natural language processing field. The standard paradigm for exploiting them has two steps: first, pre-train a model, e.g. BERT, on large-scale unlabeled monolingual data; then, fine-tune the pre-trained model with labeled data from a downstream task. However, in neural machine translation (NMT), we address the problem that the training objective of the bilingual task differs substantially from that of the monolingual pre-trained model. Because of this gap, fine-tuning alone cannot fully exploit the prior language knowledge in NMT. In this paper, we propose Apt, a framework for acquiring knowledge from a pre-trained model for NMT. The proposed approach includes two modules: 1) a dynamic fusion mechanism that fuses task-specific features, adapted from general knowledge, into the NMT network, and 2) a knowledge distillation paradigm that continuously learns language knowledge during NMT training. Together, these modules integrate suitable knowledge from pre-trained models to improve NMT. Experimental results on the WMT English-to-German, German-to-English, and Chinese-to-English machine translation tasks show that our model outperforms strong baselines and the fine-tuning counterparts.
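The two modules described above can be illustrated with a minimal toy sketch. This is not the authors' implementation: the matrix shapes, the adapter/gate parameterization, and the exact distillation objective are assumptions made for illustration. The fusion step gates adapted pre-trained features into an NMT hidden state, and the distillation step keeps the NMT model's output distribution close to the pre-trained model's.

```python
import math
import random

random.seed(0)
d = 4  # toy hidden size

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def matvec(W, v):
    # Plain matrix-vector product over Python lists.
    return [sum(w_ij * v_j for w_ij, v_j in zip(row, v)) for row in W]

# Hypothetical parameters: an adapter that specializes general pre-trained
# features, and a gate conditioned on both the NMT state and those features.
W_a = [[random.uniform(-0.1, 0.1) for _ in range(d)] for _ in range(d)]
W_g = [[random.uniform(-0.1, 0.1) for _ in range(2 * d)] for _ in range(d)]

def dynamic_fusion(h_nmt, f_pre):
    """Fuse task-adapted pre-trained features into an NMT hidden state."""
    f_adapted = [math.tanh(x) for x in matvec(W_a, f_pre)]
    gate = [sigmoid(x) for x in matvec(W_g, h_nmt + f_adapted)]
    # Per-dimension gate decides how much pre-trained knowledge to admit.
    return [h + g * f for h, g, f in zip(h_nmt, gate, f_adapted)]

def distill_loss(student_logits, teacher_logits):
    """KL(teacher || student): one common distillation objective, used here
    to stand in for learning from the pre-trained model during training."""
    def softmax(z):
        m = max(z)
        e = [math.exp(x - m) for x in z]
        s = sum(e)
        return [x / s for x in e]
    p_t, p_s = softmax(teacher_logits), softmax(student_logits)
    return sum(t * (math.log(t) - math.log(s)) for t, s in zip(p_t, p_s))

h = [random.gauss(0, 1) for _ in range(d)]
f = [random.gauss(0, 1) for _ in range(d)]
fused = dynamic_fusion(h, f)
print(len(fused))          # 4
print(distill_loss(h, h))  # 0.0 (identical distributions)
```

In the paper's actual setting, `h_nmt` would be a Transformer layer state and `f_pre` a BERT-style representation; here both are random vectors so the shapes of the two mechanisms are visible in isolation.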




How to Cite

Weng, R., Yu, H., Huang, S., Cheng, S., & Luo, W. (2020). Acquiring Knowledge from Pre-Trained Model to Neural Machine Translation. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 9266-9273.



AAAI Technical Track: Natural Language Processing