UNISON: Unpaired Cross-Lingual Image Captioning


  • Jiahui Gao The University of Hong Kong
  • Yi Zhou Johns Hopkins University
  • Philip L. H. Yu The Education University of Hong Kong
  • Shafiq Joty Nanyang Technological University
  • Jiuxiang Gu Adobe Research




Speech & Natural Language Processing (SNLP)


Image captioning has emerged as an active research field in recent years due to its broad application scenarios. The traditional paradigm of image captioning relies on paired image-caption datasets to train the model in a supervised manner. However, creating such paired datasets for every target language is prohibitively expensive, which hinders the extensibility of captioning technology and deprives a large part of the world population of its benefit. In this work, we present a novel unpaired cross-lingual method to generate image captions without relying on any caption corpus in the source or the target language. Specifically, our method consists of two phases: (1) a cross-lingual auto-encoding process, which utilizes a parallel sentence (bitext) corpus to learn the mapping from the source to the target language in the scene graph encoding space and decodes sentences in the target language, and (2) a cross-modal unsupervised feature mapping, which seeks to map the encoded scene graph features from the image modality to the language modality. We verify the effectiveness of our proposed method on the Chinese image caption generation task. Comparisons against several existing methods demonstrate the advantages of our approach.
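The second phase described above, mapping scene graph features from the image modality into the language modality, can be sketched in miniature. The snippet below is purely illustrative and is not the paper's actual model: it fakes random "image-modality" and "language-modality" feature matrices with a known linear relation, then fits a linear map between the two spaces by least squares. The dimensions, the linear form of the map, and all variable names are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper): scene-graph feature size
# and number of training pairs.
d = 16
n_train = 200

# Fake "image-modality" scene-graph features, and "language-modality"
# features generated from them via a hidden ground-truth map plus noise,
# purely to exercise the fitting step below.
img_feats = rng.normal(size=(n_train, d))
true_map = rng.normal(size=(d, d))
lang_feats = img_feats @ true_map + 0.01 * rng.normal(size=(n_train, d))

# Phase-2 idea in miniature: learn a map W that carries image-modality
# features into the language-modality feature space (least-squares fit).
W, *_ = np.linalg.lstsq(img_feats, lang_feats, rcond=None)

# Map a held-out image-modality feature into the language space; a caption
# decoder trained on language-modality features could then consume it.
x_img = rng.normal(size=(1, d))
x_lang = x_img @ W
print(x_lang.shape)  # (1, 16)
```

In the actual method the mapping is learned without paired supervision and operates on scene graph encodings rather than raw feature vectors; the linear least-squares fit here only illustrates the modality-alignment idea.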




How to Cite

Gao, J., Zhou, Y., Yu, P. L. H., Joty, S., & Gu, J. (2022). UNISON: Unpaired Cross-Lingual Image Captioning. Proceedings of the AAAI Conference on Artificial Intelligence, 36(10), 10654-10662. https://doi.org/10.1609/aaai.v36i10.21310


