Evaluating the Cross-Lingual Effectiveness of Massively Multilingual Neural Machine Translation

Authors

  • Aditya Siddhant, Google Research
  • Melvin Johnson, Google Research
  • Henry Tsai, Google Research
  • Naveen Ari, Google Research
  • Jason Riesa, Google Research
  • Ankur Bapna, Google Research
  • Orhan Firat, Google Research
  • Karthik Raman, Google Research

DOI:

https://doi.org/10.1609/aaai.v34i05.6414

Abstract

The recently proposed massively multilingual neural machine translation (NMT) system has been shown to be capable of translating over 100 languages to and from English within a single model (Aharoni, Johnson, and Firat 2019). Its improved translation performance on low-resource languages hints at potential cross-lingual transfer capability for downstream tasks. In this paper, we evaluate the cross-lingual effectiveness of representations from the encoder of a massively multilingual NMT model on 5 downstream classification and sequence labeling tasks covering a diverse set of over 50 languages. We compare against a strong baseline, multilingual BERT (mBERT) (Devlin et al. 2018), in different cross-lingual transfer learning scenarios and show gains in zero-shot transfer on 4 out of these 5 tasks.
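The zero-shot transfer setup described in the abstract follows a simple protocol: sentence representations come from a frozen multilingual encoder, a task classifier is trained on English data only, and that classifier is then evaluated directly on other languages. The sketch below illustrates this protocol only; the `encode` function is a hypothetical stand-in (hash-based bag-of-words features, which would not actually transfer across languages) rather than the paper's NMT encoder, and the data and labels are toy examples.

```python
import zlib

import numpy as np
from sklearn.linear_model import LogisticRegression


def encode(sentences, dim=64):
    """Stand-in for the frozen multilingual encoder.

    In the paper, this role is played by (pooled) encoder activations of
    the massively multilingual NMT model; here we use bag-of-words hash
    features purely so the sketch runs end to end.
    """
    feats = np.zeros((len(sentences), dim))
    for i, sent in enumerate(sentences):
        for tok in sent.lower().split():
            feats[i, zlib.crc32(tok.encode()) % dim] += 1.0
    return feats


# Toy English training data for a sentiment-style classification task.
train_en = ["great movie", "terrible plot", "wonderful acting", "awful film"]
y_train = [1, 0, 1, 0]

# Zero-shot evaluation data in another language (Spanish here). Labels are
# used only for scoring; no non-English text is seen during training.
test_es = ["película maravillosa", "trama terrible"]
y_test = [1, 0]

# Train the task head on English representations only, then score directly
# on the other language -- the zero-shot cross-lingual transfer protocol.
clf = LogisticRegression().fit(encode(train_en), y_train)
print("zero-shot accuracy:", clf.score(encode(test_es), y_test))
```

With a genuinely multilingual encoder (the paper's NMT encoder, or mBERT as the baseline), semantically similar sentences in different languages map to nearby representations, which is what allows the English-trained classifier to score well on unseen languages.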

Published

2020-04-03

How to Cite

Siddhant, A., Johnson, M., Tsai, H., Ari, N., Riesa, J., Bapna, A., Firat, O., & Raman, K. (2020). Evaluating the Cross-Lingual Effectiveness of Massively Multilingual Neural Machine Translation. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 8854-8861. https://doi.org/10.1609/aaai.v34i05.6414

Issue

Vol. 34 No. 05 (2020)

Section

AAAI Technical Track: Natural Language Processing