Towards Reliable Neural Machine Translation with Consistency-Aware Meta-Learning
Keywords: SNLP: Machine Translation & Multilinguality, SNLP: Generation
Abstract
Neural machine translation (NMT) has achieved remarkable success in producing high-quality translations. However, current NMT systems suffer from a lack of reliability: their outputs are often affected by lexical or syntactic changes in inputs, resulting in large variations in quality. This limitation hinders the practicality and trustworthiness of NMT. A contributing factor to this problem is that NMT models trained with the one-to-one paradigm struggle to handle the source diversity phenomenon, where inputs with the same meaning can be expressed differently. In this work, we treat this problem as a bilevel optimization problem and present a consistency-aware meta-learning (CAML) framework derived from the model-agnostic meta-learning (MAML) algorithm to address it. Specifically, the NMT model with CAML (named CoNMT) first learns a consistent meta representation of semantically equivalent sentences in the outer loop. Subsequently, a mapping from the meta representation to the output sentence is learned in the inner loop, allowing the NMT model to translate semantically equivalent sentences to the same target sentence. We conduct experiments on the NIST Chinese-to-English task, three WMT translation tasks, and the TED M2O task. The results demonstrate that CoNMT effectively improves overall translation quality and reliably handles diverse inputs.
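The bilevel (inner/outer loop) structure the abstract describes comes from MAML-style meta-learning. The toy sketch below illustrates that structure on a one-parameter scalar model: the inner loop adapts parameters on each task's training example, and the outer loop updates the meta-parameters against the post-adaptation loss. This is only an illustration of the general MAML update (using the common first-order approximation), not the paper's CoNMT objective, which operates on NMT encoder representations of semantically equivalent sentences.

```python
# Toy first-order MAML sketch (illustrative; not the paper's implementation).

def loss(w, x, y):
    # Squared error of a 1-parameter linear model y_hat = w * x.
    return (w * x - y) ** 2

def grad(w, x, y):
    # Analytic gradient of the squared-error loss w.r.t. w.
    return 2 * (w * x - y) * x

def maml_step(w_meta, tasks, inner_lr=0.05, outer_lr=0.01):
    """One bilevel update: inner-loop adaptation per task, then a meta-update."""
    meta_grad = 0.0
    for (x_tr, y_tr), (x_val, y_val) in tasks:
        # Inner loop: one gradient step on the task's training example.
        w_task = w_meta - inner_lr * grad(w_meta, x_tr, y_tr)
        # Outer loop: gradient of the post-adaptation (validation) loss,
        # ignoring second-order terms (first-order MAML approximation).
        meta_grad += grad(w_task, x_val, y_val)
    return w_meta - outer_lr * meta_grad / len(tasks)

# Two toy "tasks" sharing the same underlying mapping y = 2x, loosely
# analogous to semantically equivalent inputs sharing one target sentence.
tasks = [((1.0, 2.0), (2.0, 4.0)), ((3.0, 6.0), (1.5, 3.0))]

w = 0.0
for _ in range(200):
    w = maml_step(w, tasks)
# The meta-parameter converges toward the shared solution w = 2.
```

In CAML, the analogue of the shared solution is a consistent meta representation that all paraphrases of a source sentence map to, so the inner loop can decode them to the same target.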
How to Cite
Weng, R., Wang, Q., Cheng, W., Zhu, C., & Zhang, M. (2023). Towards Reliable Neural Machine Translation with Consistency-Aware Meta-Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 13709-13717. https://doi.org/10.1609/aaai.v37i11.26606
AAAI Technical Track on Speech & Natural Language Processing