Object Relation Attention for Image Paragraph Captioning

Authors

  • Li-Chuan Yang, National Taiwan University
  • Chih-Yuan Yang, National Taiwan University
  • Jane Yung-jen Hsu, National Taiwan University

DOI:

https://doi.org/10.1609/aaai.v35i4.16423

Keywords:

Visual Reasoning & Symbolic Representations

Abstract

Image paragraph captioning aims to automatically generate a paragraph from a given image. It extends image captioning by generating multiple sentences instead of a single one, and it is more challenging because paragraphs are longer, more informative, and more linguistically complex. Because a paragraph consists of several sentences, an effective image paragraph captioning method should generate consistent sentences rather than contradictory ones. How to achieve this goal remains an open question, and to address it we propose a method that incorporates objects' spatial coherence into a language-generating model. For every two overlapping objects, the proposed method concatenates their raw visual features to create two directional pair features, and it learns weights to fuse those pair features into relation-aware object features for a language-generating model. Experimental results show that the proposed network extracts effective object features for image paragraph captioning and achieves promising performance against existing methods.
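The pairing-and-weighting idea in the abstract can be sketched in a few lines. The following is a minimal, illustrative NumPy sketch, not the paper's exact formulation: the overlap test, the projection matrix `W`, the scoring vector `v`, and the softmax aggregation are all assumptions made for the example.

```python
import numpy as np

def boxes_overlap(a, b):
    # a, b: [x1, y1, x2, y2]; True if the two boxes intersect
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def relation_aware_features(feats, boxes, W, v):
    """For every ordered pair of overlapping objects (i, j), build a
    directional pair feature [f_i; f_j], score it with a learned vector v,
    and softmax-average the W-projected pair features into one
    relation-aware feature per object. (Illustrative sketch; W and v
    stand in for learned parameters.)"""
    n, d = feats.shape
    out = feats.copy()  # objects with no overlapping neighbor keep their raw feature
    for i in range(n):
        pair_feats, scores = [], []
        for j in range(n):
            if i != j and boxes_overlap(boxes[i], boxes[j]):
                p = np.concatenate([feats[i], feats[j]])  # directional pair (i -> j)
                pair_feats.append(W @ p)                  # project 2d -> d
                scores.append(v @ p)                      # scalar relevance score
        if pair_feats:
            s = np.asarray(scores)
            w = np.exp(s - s.max())
            w /= w.sum()                                  # softmax over pair features
            out[i] = np.sum(w[:, None] * np.stack(pair_feats), axis=0)
    return out
```

In a trained model the per-object outputs would feed the language-generating decoder in place of the raw region features; here `W` and `v` are just random stand-ins for learned parameters.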

Published

2021-05-18

How to Cite

Yang, L.-C., Yang, C.-Y., & Hsu, J. Y.-j. (2021). Object Relation Attention for Image Paragraph Captioning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(4), 3136-3144. https://doi.org/10.1609/aaai.v35i4.16423

Section

AAAI Technical Track on Computer Vision III