Improving Image Captioning by Leveraging Intra- and Inter-layer Global Representation in Transformer Network

Authors

  • Jiayi Ji Media Analytics and Computing Lab, Department of Artificial Intelligence, School of Informatics, Xiamen University, 361005, China
  • Yunpeng Luo Media Analytics and Computing Lab, Department of Artificial Intelligence, School of Informatics, Xiamen University, 361005, China
  • Xiaoshuai Sun Media Analytics and Computing Lab, Department of Artificial Intelligence, School of Informatics, Xiamen University, 361005, China Institute of Artificial Intelligence, Xiamen University
  • Fuhai Chen Media Analytics and Computing Lab, Department of Artificial Intelligence, School of Informatics, Xiamen University, 361005, China
  • Gen Luo Media Analytics and Computing Lab, Department of Artificial Intelligence, School of Informatics, Xiamen University, 361005, China
  • Yongjian Wu Tencent Youtu Lab
  • Yue Gao Tsinghua University
  • Rongrong Ji Media Analytics and Computing Lab, Department of Artificial Intelligence, School of Informatics, Xiamen University, 361005, China Institute of Artificial Intelligence, Xiamen University

DOI:

https://doi.org/10.1609/aaai.v35i2.16258

Keywords:

Multi-modal Vision, Language and Vision

Abstract

Transformer-based architectures have shown great success in image captioning, where object regions are encoded and then attended over to produce vectorial representations that guide the caption decoding. However, such vectorial representations contain only region-level information and neglect the global information reflecting the entire image, which limits the capability for complex multi-modal reasoning in image captioning. In this paper, we introduce a Global Enhanced Transformer (termed GET) to enable the extraction of a more comprehensive global representation, which then adaptively guides the decoder to generate high-quality captions. In GET, a Global Enhanced Encoder is designed for the embedding of the global feature, and a Global Adaptive Decoder is designed for the guidance of the caption generation. The former models intra- and inter-layer global representation by taking advantage of the proposed Global Enhanced Attention and a layer-wise fusion module. The latter contains a Global Adaptive Controller that adaptively fuses the global information into the decoder to guide the caption generation. Extensive experiments on the MS COCO dataset demonstrate the superiority of our GET over many state-of-the-art methods.
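The abstract's Global Adaptive Controller can be pictured as a gated fusion between the decoder hidden state and the global image representation. The sketch below illustrates one plausible form of such a gate in NumPy; the function names, the concatenation-based gate, and all shapes are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def global_adaptive_fusion(h, g, W, b):
    """Hypothetical gated fusion: a per-dimension gate decides how much of
    the global feature g to mix into the decoder state h.

      gate = sigmoid(W @ [h; g] + b)
      out  = gate * g + (1 - gate) * h
    """
    gate = sigmoid(W @ np.concatenate([h, g]) + b)
    return gate * g + (1.0 - gate) * h

rng = np.random.default_rng(0)
d = 8
h = rng.standard_normal(d)              # decoder hidden state (assumed dim 8)
g = rng.standard_normal(d)              # global image representation
W = rng.standard_normal((d, 2 * d)) * 0.1  # learned gate weights (random here)
b = np.zeros(d)

out = global_adaptive_fusion(h, g, W, b)
print(out.shape)  # (8,)
```

Because the gate lies in (0, 1), each output dimension is a convex combination of the corresponding entries of `h` and `g`, so the controller can smoothly range from ignoring the global feature to relying on it entirely.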

Published

2021-05-18

How to Cite

Ji, J., Luo, Y., Sun, X., Chen, F., Luo, G., Wu, Y., Gao, Y., & Ji, R. (2021). Improving Image Captioning by Leveraging Intra- and Inter-layer Global Representation in Transformer Network. Proceedings of the AAAI Conference on Artificial Intelligence, 35(2), 1655-1663. https://doi.org/10.1609/aaai.v35i2.16258

Section

AAAI Technical Track on Computer Vision I