Zero-Resource Neural Machine Translation with Multi-Agent Communication Game

Authors

  • Yun Chen, The University of Hong Kong
  • Yang Liu, Tsinghua University
  • Victor Li, The University of Hong Kong

DOI:

https://doi.org/10.1609/aaai.v32i1.11976

Keywords:

NMT, zero-resource, multimodal

Abstract

While end-to-end neural machine translation (NMT) has achieved notable success in recent years in translating a handful of resource-rich language pairs, it still suffers from the data scarcity problem for low-resource language pairs and domains. To tackle this problem, we propose an interactive multimodal framework for zero-resource neural machine translation. Instead of being passively exposed to large amounts of parallel corpora, our learners (implemented as encoder-decoder architectures) engage in cooperative image description games, and thus develop their own image captioning or neural machine translation model from the need to communicate in order to succeed at the game. Experimental results on the IAPR-TC12 and Multi30K datasets show that the proposed learning mechanism significantly improves over state-of-the-art methods.
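The cooperative game described in the abstract can be sketched with a toy example: one agent describes an "image" in language A, a second agent translates that description into language B, and both are rewarded only when the translated description matches the image's target-language reference. This is a minimal illustrative sketch, not the paper's model — the tabular agents, the reward values, and the two-image "dataset" below are hypothetical stand-ins for the encoder-decoder policies and likelihood-based rewards used in the actual work.

```python
import random

random.seed(0)

# Toy "dataset": image ids paired with reference captions in two languages.
IMAGES = {0: ("a red car", "ein rotes auto"),
          1: ("a small dog", "ein kleiner hund")}

class TableAgent:
    """A hypothetical tabular 'policy': scores each possible output and
    updates those scores from the game reward (a crude stand-in for the
    policy-gradient updates an encoder-decoder agent would receive)."""
    def __init__(self, actions):
        self.scores = {a: 0.0 for a in actions}

    def act(self, epsilon=0.3):
        # epsilon-greedy: explore occasionally, otherwise pick the best-scored output
        if random.random() < epsilon:
            return random.choice(list(self.scores))
        return max(self.scores, key=self.scores.get)

    def learn(self, action, reward):
        self.scores[action] += reward

# Agent A: captions an image in language A. Agent B: translates A's caption.
captioners = {i: TableAgent([c for c, _ in IMAGES.values()]) for i in IMAGES}
translator = {c: TableAgent([g for _, g in IMAGES.values()])
              for c, _ in IMAGES.values()}

def play_round(image_id):
    caption = captioners[image_id].act()
    translation = translator[caption].act()
    # Cooperative reward: the game succeeds only if the chain of the two
    # agents reproduces the target-language reference for this image.
    reward = 1.0 if translation == IMAGES[image_id][1] else -0.1
    captioners[image_id].learn(caption, reward)
    translator[caption].learn(translation, reward)
    return reward

# Repeated play: neither agent ever sees a parallel sentence pair; the
# pressure to succeed at the game shapes both policies jointly.
for step in range(500):
    play_round(step % len(IMAGES))
```

The key property the sketch tries to convey is the zero-resource setup: the translator is never shown an (English, German) sentence pair directly — it only ever receives a reward signal routed through the shared image, which is what lets the framework dispense with parallel corpora.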

Published

2018-04-27

How to Cite

Chen, Y., Liu, Y., & Li, V. (2018). Zero-Resource Neural Machine Translation with Multi-Agent Communication Game. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11976