DirectQE: Direct Pretraining for Machine Translation Quality Estimation

Authors

  • Qu Cui, National Key Laboratory for Novel Software Technology, Nanjing University
  • Shujian Huang, National Key Laboratory for Novel Software Technology, Nanjing University
  • Jiahuan Li, National Key Laboratory for Novel Software Technology, Nanjing University
  • Xiang Geng, National Key Laboratory for Novel Software Technology, Nanjing University
  • Zaixiang Zheng, National Key Laboratory for Novel Software Technology, Nanjing University
  • Guoping Huang, Tencent AI Lab
  • Jiajun Chen, National Key Laboratory for Novel Software Technology, Nanjing University

DOI:

https://doi.org/10.1609/aaai.v35i14.17506

Keywords:

Machine Translation & Multilinguality

Abstract

Machine Translation Quality Estimation (QE) is the task of predicting the quality of machine translations without relying on any reference. Recently, the predictor-estimator framework, which trains the predictor as a feature extractor and thereby leverages extra parallel corpora without QE labels, has achieved promising QE performance. However, we argue that there are gaps between the predictor and the estimator in both data quality and training objectives, which prevent QE models from benefiting more directly from large parallel corpora. We propose a novel framework called DirectQE that provides direct pretraining for QE tasks. In DirectQE, a generator is trained to produce pseudo data that is closer to real QE data, and a detector is pretrained on these data with novel objectives that are akin to the QE task. Experiments on widely used benchmarks show that DirectQE outperforms existing methods without using any pretrained models such as BERT. We also present extensive analyses showing how bridging the two gaps contributes to our improvements.
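
The abstract gives only a high-level picture of the generator/detector split. As a rough, hypothetical illustration of that idea, the sketch below samples a translation from a generator, derives token-level OK/BAD pseudo labels by comparing it against the reference, and pretrains a detector on those labels. Everything here (the Detector class, the pseudo_labels helper, all sizes and hyperparameters) is invented for this sketch and is not the authors' code; the paper's actual models and pretraining objectives are more elaborate.

    # Hypothetical sketch of DirectQE-style pretraining (illustrative only;
    # not the authors' implementation). A generator's sampled translation is
    # compared with the reference to derive token-level OK/BAD pseudo labels,
    # and a detector is pretrained to predict those labels.
    import torch
    import torch.nn as nn

    PAD, VOCAB, DIM = 0, 1000, 64

    class Detector(nn.Module):
        # Tiny stand-in detector: embeds target tokens and scores each one
        # as OK (0) or BAD (1). The real model is far larger.
        def __init__(self):
            super().__init__()
            self.emb = nn.Embedding(VOCAB, DIM, padding_idx=PAD)
            self.rnn = nn.LSTM(DIM, DIM, batch_first=True, bidirectional=True)
            self.out = nn.Linear(2 * DIM, 2)

        def forward(self, tgt):
            hidden, _ = self.rnn(self.emb(tgt))
            return self.out(hidden)            # (batch, length, 2)

    def pseudo_labels(sample, reference):
        # BAD (1) wherever the generator's token disagrees with the reference.
        return [int(s != r) for s, r in zip(sample, reference)]

    detector = Detector()
    optimizer = torch.optim.Adam(detector.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    reference = [5, 17, 42, 9]                 # gold target token ids
    sample    = [5, 30, 42, 9]                 # generator altered one token

    tgt    = torch.tensor([sample])
    labels = torch.tensor([pseudo_labels(sample, reference)])

    logits = detector(tgt)
    loss = loss_fn(logits.view(-1, 2), labels.view(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

After pretraining on such pseudo data, the detector would presumably be fine-tuned on the much smaller human-labeled QE data; the abstract's point is that both the pseudo data and the pretraining objective are deliberately kept close to that downstream task.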

Published

2021-05-18

How to Cite

Cui, Q., Huang, S., Li, J., Geng, X., Zheng, Z., Huang, G., & Chen, J. (2021). DirectQE: Direct Pretraining for Machine Translation Quality Estimation. Proceedings of the AAAI Conference on Artificial Intelligence, 35(14), 12719-12727. https://doi.org/10.1609/aaai.v35i14.17506

Issue

Vol. 35 No. 14 (2021)

Section

AAAI Technical Track on Speech and Natural Language Processing I