MAGIC: Multimodal relAtional Graph adversarIal inferenCe for Diverse and Unpaired Text-Based Image Captioning

Authors

  • Wenqiao Zhang, Zhejiang University
  • Haochen Shi, Université de Montréal
  • Jiannan Guo, Zhejiang University
  • Shengyu Zhang, Zhejiang University
  • Qingpeng Cai, National University of Singapore
  • Juncheng Li, Zhejiang University
  • Sihui Luo, Ningbo University
  • Yueting Zhuang, Zhejiang University

DOI:

https://doi.org/10.1609/aaai.v36i3.20243

Keywords:

Computer Vision (CV)

Abstract

Text-based image captioning (TextCap) requires simultaneously comprehending visual content and reading the text in images to generate a natural language description. Although this task can teach machines to further understand the complex human environment, given that text is omnipresent in our daily surroundings, it poses additional challenges beyond normal captioning. A text-based image intuitively contains abundant and complex multimodal relational content; that is, the details of an image can be described from multiple views rather than with a single caption. Although we could introduce additional paired training data to capture this descriptive diversity, annotating TextCap pairs with extra texts is labor-intensive and time-consuming. Based on this insight, we investigate how to generate diverse captions that focus on different parts of an image using an unpaired training paradigm. We propose the Multimodal relAtional Graph adversarIal InferenCe (MAGIC) framework for diverse and unpaired TextCap. The framework adaptively constructs multiple multimodal relational graphs of an image and models the complex relationships among these graphs to represent descriptive diversity. Moreover, a cascaded generative adversarial network is developed from the modeled graphs to infer unpaired caption generation at the image–sentence feature-alignment and linguistic-coherence levels. We validate the effectiveness of MAGIC in generating diverse captions from different relational information of an image. Experimental results show that MAGIC can produce very promising outcomes without using any image–caption training pairs.
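To make the cascaded adversarial inference described above concrete, the following is a minimal, hypothetical PyTorch sketch (not the authors' implementation): a caption generator conditioned on a precomputed graph embedding, which stands in for one multimodal relational graph view of an image, plus two critics, one scoring image–sentence feature alignment and one scoring linguistic coherence of the generated caption. All module names, dimensions, and losses are illustrative assumptions.

# Hypothetical sketch of a cascaded adversarial setup for unpaired captioning.
# The graph embedding stands in for one multimodal relational graph view; the
# two discriminators correspond to the feature-alignment and linguistic-
# coherence levels mentioned in the abstract. Illustrative only.
import torch
import torch.nn as nn

class CaptionGenerator(nn.Module):
    """Decodes caption token logits from a graph embedding."""
    def __init__(self, graph_dim=512, vocab_size=10000, hidden_dim=512, max_len=20):
        super().__init__()
        self.max_len = max_len
        self.init_h = nn.Linear(graph_dim, hidden_dim)
        self.rnn = nn.GRUCell(hidden_dim, hidden_dim)
        self.to_vocab = nn.Linear(hidden_dim, vocab_size)

    def forward(self, graph_emb):
        h = torch.tanh(self.init_h(graph_emb))   # (B, H) initial state from the graph
        inp = torch.zeros_like(h)                # toy start-of-sentence input
        logits = []
        for _ in range(self.max_len):
            h = self.rnn(inp, h)
            logits.append(self.to_vocab(h))
            inp = h                              # feed hidden state back (toy decoding)
        return torch.stack(logits, dim=1)        # (B, T, V)

class AlignmentDiscriminator(nn.Module):
    """Scores whether a sentence feature matches an image (graph) feature."""
    def __init__(self, graph_dim=512, sent_dim=512):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(graph_dim + sent_dim, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, graph_emb, sent_emb):
        return self.score(torch.cat([graph_emb, sent_emb], dim=-1))

class CoherenceDiscriminator(nn.Module):
    """Scores linguistic coherence from soft token distributions."""
    def __init__(self, vocab_size=10000, hidden_dim=256):
        super().__init__()
        self.embed = nn.Linear(vocab_size, hidden_dim)   # acts on soft one-hots
        self.rnn = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, token_logits):
        soft = token_logits.softmax(dim=-1)
        _, h = self.rnn(self.embed(soft))
        return self.score(h[-1])

if __name__ == "__main__":
    # Toy forward pass with random features standing in for one graph view.
    B, V = 4, 10000
    graph_emb = torch.randn(B, 512)
    gen, d_align, d_lang = CaptionGenerator(vocab_size=V), AlignmentDiscriminator(), CoherenceDiscriminator(vocab_size=V)
    token_logits = gen(graph_emb)                                      # (B, T, V)
    sent_emb = token_logits.softmax(-1).mean(dim=1) @ torch.randn(V, 512)  # toy sentence feature
    bce = nn.BCEWithLogitsLoss()
    # Generator tries to fool both critics: aligned with the image and fluent.
    g_loss = bce(d_align(graph_emb, sent_emb), torch.ones(B, 1)) + \
             bce(d_lang(token_logits), torch.ones(B, 1))
    print("toy generator loss:", g_loss.item())

In a full unpaired setting, the coherence critic would additionally be trained on an external text corpus (no image–caption pairs required), while the alignment critic contrasts matched and mismatched graph–sentence feature pairs; the cascade here is only meant to illustrate that two-level adversarial signal.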


Published

2022-06-28

How to Cite

Zhang, W., Shi, H., Guo, J., Zhang, S., Cai, Q., Li, J., Luo, S., & Zhuang, Y. (2022). MAGIC: Multimodal relAtional Graph adversarIal inferenCe for Diverse and Unpaired Text-Based Image Captioning. Proceedings of the AAAI Conference on Artificial Intelligence, 36(3), 3335-3343. https://doi.org/10.1609/aaai.v36i3.20243

Issue

Vol. 36 No. 3 (2022)

Section

AAAI Technical Track on Computer Vision III