Object Relation Attention for Image Paragraph Captioning
Keywords: Visual Reasoning & Symbolic Representations
Abstract

Image paragraph captioning aims to automatically generate a paragraph from a given image. It extends image captioning by generating multiple sentences instead of a single one, and it is more challenging because paragraphs are longer, more informative, and more linguistically complex. Because a paragraph consists of several sentences, an effective image paragraph captioning method should generate consistent sentences rather than contradictory ones. How to achieve this goal remains an open question, and to address it we propose a method that incorporates objects' spatial coherence into a language-generating model. For every pair of overlapping objects, the proposed method concatenates their raw visual features to create two directional pair features and learns attention weights that combine those pair features into relation-aware object features for the language-generating model. Experimental results show that the proposed network extracts effective object features for image paragraph captioning and achieves promising performance against existing methods.
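The pairing-and-attention step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature dimension, the linear projection `W_proj`, and the scoring vector `w_attn` are all assumed placeholders, and the parameters are randomly initialized here rather than trained end-to-end as in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4  # toy feature dimension (the paper uses CNN region features)

# Raw visual features for three detected objects (hypothetical values).
feats = rng.normal(size=(3, D))

# Suppose objects (0, 1) and (1, 2) overlap; build both directional pairs.
overlaps = [(0, 1), (1, 2)]
pairs = []
for i, j in overlaps:
    pairs.append((i, np.concatenate([feats[i], feats[j]])))  # direction i -> j
    pairs.append((j, np.concatenate([feats[j], feats[i]])))  # direction j -> i

# Assumed learnable parameters (random here; trained with the model in practice).
W_proj = rng.normal(size=(2 * D, D))   # projects a pair feature back to D dims
w_attn = rng.normal(size=(2 * D,))     # scores each pair feature

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Relation-aware feature for each object: an attention-weighted sum of its
# directional pair features; objects with no overlaps keep their raw feature.
relation_aware = feats.copy()
for obj in range(len(feats)):
    own = [p for o, p in pairs if o == obj]
    if own:
        P = np.stack(own)                # (num_pairs, 2D)
        alpha = softmax(P @ w_attn)      # attention weights over this object's pairs
        relation_aware[obj] = alpha @ (P @ W_proj)

print(relation_aware.shape)
```

The relation-aware features have the same shape as the raw object features, so they can be fed to the language-generating model as a drop-in replacement.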
How to Cite
Yang, L.-C., Yang, C.-Y., & Hsu, J. Y.-j. (2021). Object Relation Attention for Image Paragraph Captioning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(4), 3136-3144. https://doi.org/10.1609/aaai.v35i4.16423
AAAI Technical Track on Computer Vision III