Reinforced Curriculum Learning on Pre-Trained Neural Machine Translation Models

Authors

  • Mingjun Zhao, University of Alberta
  • Haijiang Wu, Tencent
  • Di Niu, University of Alberta
  • Xiaoli Wang, Tencent

DOI

https://doi.org/10.1609/aaai.v34i05.6513

Abstract

The competitive performance of neural machine translation (NMT) critically relies on large amounts of training data. However, acquiring high-quality translation pairs requires expert knowledge and is costly. Therefore, how to best utilize a given dataset of samples with diverse quality and characteristics becomes an important yet understudied question in NMT. Curriculum learning methods have been introduced to NMT to optimize a model's performance by prescribing the data input order, based on heuristics such as the assessment of noise and difficulty levels. However, existing methods require training from scratch, while in practice most NMT models are pre-trained on big data already. Moreover, as heuristics, they do not generalize well. In this paper, we aim to learn a curriculum for improving a pre-trained NMT model by re-selecting influential data samples from the original training set and formulate this task as a reinforcement learning problem. Specifically, we propose a data selection framework based on Deterministic Actor-Critic, in which a critic network predicts the expected change of model performance due to a certain sample, while an actor network learns to select the best sample out of a random batch of samples presented to it. Experiments on several translation datasets show that our method can further improve the performance of NMT when original batch training reaches its ceiling, without using additional new training data, and significantly outperforms several strong baseline methods.
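
To make the selection loop described in the abstract concrete, below is a minimal sketch of actor-critic data re-selection on top of a pre-trained model. This is not the authors' implementation: the sample features, the reward stand-in, the network sizes, and the softmax surrogate for the deterministic policy update are all illustrative assumptions. In the paper's setting, the reward would come from fine-tuning the pre-trained NMT model on the selected sample and measuring the resulting change in dev-set performance.

    import torch
    import torch.nn as nn

    FEAT_DIM = 8        # hypothetical per-sample features (e.g., length, rarity, loss)
    CANDIDATES = 16     # size of the random batch presented to the actor

    def mlp():
        return nn.Sequential(nn.Linear(FEAT_DIM, 32), nn.ReLU(), nn.Linear(32, 1))

    actor = mlp()    # scores candidate samples; the top-scored one is selected
    critic = mlp()   # predicts the expected change in dev performance per sample
    opt_a = torch.optim.Adam(actor.parameters(), lr=1e-3)
    opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)

    def observed_reward(sample_feats):
        # Stand-in for the real signal: fine-tune the pre-trained NMT model on
        # the selected sample and measure the change in dev-set performance.
        return sample_feats @ torch.linspace(-1.0, 1.0, FEAT_DIM)

    for step in range(200):
        feats = torch.randn(CANDIDATES, FEAT_DIM)   # a random batch of candidate samples
        idx = actor(feats).squeeze(-1).argmax()     # actor deterministically picks one
        reward = observed_reward(feats[idx]).detach()

        # Critic: regress the predicted performance change toward the observed one.
        critic_loss = (critic(feats[idx]).squeeze(-1) - reward).pow(2)
        opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

        # Actor: softmax-relax the hard selection and push probability mass toward
        # samples the critic expects to help most (a simple surrogate for the
        # deterministic policy-gradient update).
        probs = torch.softmax(actor(feats).squeeze(-1), dim=0)
        actor_loss = -(probs * critic(feats).squeeze(-1).detach()).sum()
        opt_a.zero_grad(); actor_loss.backward(); opt_a.step()

The softmax relaxation is one simple way to make the argmax selection trainable; the key structure matches the abstract: the critic estimates each sample's expected effect on model performance, and the actor learns to pick the sample in the presented batch with the highest expected benefit.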

Published

2020-04-03

How to Cite

Zhao, M., Wu, H., Niu, D., & Wang, X. (2020). Reinforced Curriculum Learning on Pre-Trained Neural Machine Translation Models. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 9652-9659. https://doi.org/10.1609/aaai.v34i05.6513

Section

AAAI Technical Track: Natural Language Processing