Unsupervised Bilingual Lexicon Induction from Mono-Lingual Multimodal Data

Authors

  • Shizhe Chen, Renmin University of China
  • Qin Jin, Renmin University of China
  • Alexander Hauptmann, Carnegie Mellon University

DOI:

https://doi.org/10.1609/aaai.v33i01.33018207

Abstract

Bilingual lexicon induction, translating words from the source language to the target language, is a long-standing natural language processing task. Recent endeavors show that it is promising to employ images as pivots to learn lexicon induction without relying on parallel corpora. However, these vision-based approaches simply associate words with entire images, which constrains them to translating concrete words and requires object-centered images. Humans understand words better when they appear within a sentence with context. Therefore, in this paper, we propose to utilize images and their associated captions to address the limitations of previous approaches. We propose a multi-lingual caption model trained on different mono-lingual multimodal datasets to map words in different languages into joint spaces. Two types of word representations are induced from the multi-lingual caption model: linguistic features and localized visual features. The linguistic feature is learned from sentence contexts under visual-semantic constraints, which is beneficial for learning translations of words that are less visually relevant. The localized visual feature attends to the region of the image that correlates with the word, which alleviates the need for object-centered images to obtain salient visual representations. The two types of features are complementary for word translation. Experimental results on multiple language pairs demonstrate the effectiveness of our proposed method, which substantially outperforms previous vision-based approaches without using any parallel sentences or supervision from seed word pairs.
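To make the final induction step concrete, the sketch below (not the authors' released code) illustrates how word translations could be retrieved once the linguistic and localized visual features have been extracted from the multi-lingual caption model: each source word is matched to its nearest target word in the joint space, combining the two complementary similarity scores. The feature matrices, the mixing weight alpha, and the function names are illustrative assumptions.

```python
# Minimal sketch of lexicon induction by nearest-neighbor search in a joint space.
# Assumes word features have already been extracted from the caption model.
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    # Unit-normalize rows so that dot products become cosine similarities.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def induce_lexicon(src_ling, tgt_ling, src_vis, tgt_vis, alpha=0.5, topk=1):
    """Rank target words for every source word.

    src_ling, tgt_ling: (n_src, d), (n_tgt, d) linguistic features.
    src_vis,  tgt_vis:  (n_src, d'), (n_tgt, d') localized visual features.
    alpha: illustrative weight balancing the two complementary similarities.
    """
    sim_ling = l2_normalize(src_ling) @ l2_normalize(tgt_ling).T
    sim_vis = l2_normalize(src_vis) @ l2_normalize(tgt_vis).T
    sim = alpha * sim_ling + (1.0 - alpha) * sim_vis
    # Indices of the top-k most similar target words per source word.
    return np.argsort(-sim, axis=1)[:, :topk]

# Toy usage with random features standing in for model outputs.
rng = np.random.default_rng(0)
src_l, tgt_l = rng.normal(size=(5, 8)), rng.normal(size=(7, 8))
src_v, tgt_v = rng.normal(size=(5, 6)), rng.normal(size=(7, 6))
print(induce_lexicon(src_l, tgt_l, src_v, tgt_v, topk=3))
```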

Published

2019-07-17

How to Cite

Chen, S., Jin, Q., & Hauptmann, A. (2019). Unsupervised Bilingual Lexicon Induction from Mono-Lingual Multimodal Data. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 8207-8214. https://doi.org/10.1609/aaai.v33i01.33018207

Section

AAAI Technical Track: Vision