Text Embedding Bank for Detailed Image Paragraph Captioning


  • Arjun Gupta University of Illinois at Urbana-Champaign
  • Zengming Shen University of Illinois at Urbana-Champaign
  • Thomas Huang University of Illinois at Urbana-Champaign


Machine Learning, Image Captioning, Natural Language Processing


Existing deep learning-based models for image captioning typically consist of an image encoder that extracts visual features and a language model decoder, an architecture that has shown promising results in single high-level sentence generation. However, only a word-level guiding signal is available when the image encoder is optimized to extract visual features. This inconsistency between parallel visual feature extraction and sequential text supervision limits success when the generated text is long (more than 50 words). We propose a new module, called the Text Embedding Bank (TEB), to address this problem for image paragraph captioning. This module uses the paragraph vector model to learn fixed-length feature representations from variable-length paragraphs. We refer to the fixed-length feature as the TEB. This TEB module plays two roles to benefit paragraph captioning performance. First, it acts as a form of global and coherent deep supervision to regularize visual feature extraction in the image encoder. Second, it acts as a distributed memory to provide features of the whole paragraph to the language model, which alleviates the long-term dependency problem. Adding this module to two existing state-of-the-art methods achieves a new state-of-the-art result on the paragraph captioning Stanford Visual Genome dataset.
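The paragraph vector idea underlying the TEB can be sketched as follows. This is a minimal toy implementation of the Distributed Memory paragraph vector (PV-DM) in NumPy, not the authors' code: the corpus, dimensions, and learning rate below are illustrative assumptions (the actual TEB is trained on Stanford Visual Genome paragraphs). The key property it demonstrates is that every paragraph, regardless of word count, is mapped to a fixed-length vector that is co-trained to predict the paragraph's words.

```python
import numpy as np

# Hypothetical toy corpus of two paragraphs of different lengths;
# the real TEB learns from Visual Genome paragraph annotations.
corpus = [
    "a man rides a brown horse on the beach".split(),
    "two dogs play with a red ball in the park".split(),
]

rng = np.random.default_rng(0)
vocab = sorted({w for doc in corpus for w in doc})
w2i = {w: i for i, w in enumerate(vocab)}
V, dim, window, lr = len(vocab), 16, 2, 0.05

D = rng.normal(0, 0.1, (len(corpus), dim))  # paragraph (document) vectors
W = rng.normal(0, 0.1, (V, dim))            # input word vectors
U = rng.normal(0, 0.1, (dim, V))            # softmax output weights

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for epoch in range(200):
    for d, doc in enumerate(corpus):
        for t, target in enumerate(doc):
            # context words in a window around the target word
            ctx = [w2i[doc[j]]
                   for j in range(max(0, t - window), min(len(doc), t + window + 1))
                   if j != t]
            # hidden state: average of the paragraph vector and context word vectors
            # (the paragraph vector acts as a "distributed memory" of the paragraph)
            h = (D[d] + W[ctx].sum(axis=0)) / (1 + len(ctx))
            p = softmax(h @ U)
            # cross-entropy gradient w.r.t. the logits
            g = p.copy()
            g[w2i[target]] -= 1.0
            dh = U @ g
            U -= lr * np.outer(h, g)
            D[d] -= lr * dh / (1 + len(ctx))
            for c in ctx:
                W[c] -= lr * dh / (1 + len(ctx))

# Each paragraph now has a fixed-length embedding regardless of its word count.
print(D.shape)  # → (2, 16)
```

In the TEB setting, a vector like a row of `D` would serve both as a regularizing supervision target for the image encoder and as a whole-paragraph feature supplied to the language model decoder.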




How to Cite

Gupta, A., Shen, Z., & Huang, T. (2021). Text Embedding Bank for Detailed Image Paragraph Captioning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(18), 15791-15792. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17892



AAAI Student Abstract and Poster Program