Dual-level Collaborative Transformer for Image Captioning

Authors

  • Yunpeng Luo Xiamen University
  • Jiayi Ji Xiamen University
  • Xiaoshuai Sun Xiamen University
  • Liujuan Cao Xiamen University
  • Yongjian Wu Tencent Youtu Lab
  • Feiyue Huang Tencent Youtu Lab
  • Chia-Wen Lin National Tsing Hua University
  • Rongrong Ji Xiamen University

Keywords

Language and Vision

Abstract

Descriptive region features extracted by object detection networks have played an important role in the recent advancements of image captioning. However, they are still criticized for the lack of contextual information and fine-grained details, which in contrast are the merits of traditional grid features. In this paper, we introduce a novel Dual-Level Collaborative Transformer (DLCT) network to realize the complementary advantages of the two features. Concretely, in DLCT, these two features are first processed by a novel Dual-Way Self-Attention (DWSA) to mine their intrinsic properties, where a Comprehensive Relation Attention component is also introduced to embed the geometric information. In addition, we propose a Locality-Constrained Cross Attention module to address the semantic noise caused by the direct fusion of these two features, where a geometric alignment graph is constructed to accurately align and reinforce region and grid features. To validate our model, we conduct extensive experiments on the highly competitive MS-COCO dataset, and achieve new state-of-the-art performance on both local and online test sets, i.e., 133.8% CIDEr on the Karpathy split and 135.4% CIDEr on the official split.
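The core idea of the Locality-Constrained Cross Attention module — restricting each region query to attend only to geometrically aligned grid positions — can be sketched as masked cross attention. The sketch below is a minimal, illustrative reconstruction from the abstract alone: the function names, the Boolean overlap mask, and the assumption that each region overlaps at least one grid cell are our own simplifications, not the paper's exact formulation (which builds a geometric alignment graph from the detected boxes).

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def locality_constrained_cross_attention(queries, keys, values, mask):
    """Masked cross attention between region queries and grid keys/values.

    mask[i][j] is True iff grid cell j is geometrically aligned with
    region i (e.g. the cell overlaps the region's bounding box).
    Disallowed positions get a score of -inf, so their softmax weight
    is exactly zero. Assumes every region has at least one aligned cell.
    """
    d = len(queries[0])  # feature dimension, for scaled dot-product
    out = []
    for i, q in enumerate(queries):
        scores = []
        for j, k in enumerate(keys):
            if mask[i][j]:
                scores.append(sum(a * b for a, b in zip(q, k)) / math.sqrt(d))
            else:
                scores.append(float("-inf"))  # blocked by alignment graph
        w = softmax(scores)
        # Weighted sum of grid value vectors for this region
        out.append([sum(w[j] * values[j][t] for j in range(len(values)))
                    for t in range(len(values[0]))])
    return out
```

With one region aligned only to the first of two grid cells, the output reduces to that cell's value vector, since the second cell's weight is forced to zero:

```python
out = locality_constrained_cross_attention(
    queries=[[1.0, 0.0]],
    keys=[[1.0, 0.0], [0.0, 1.0]],
    values=[[1.0, 0.0], [0.0, 1.0]],
    mask=[[True, False]],
)
# out == [[1.0, 0.0]]
```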

Published

2021-05-18

How to Cite

Luo, Y., Ji, J., Sun, X., Cao, L., Wu, Y., Huang, F., Lin, C.-W., & Ji, R. (2021). Dual-level Collaborative Transformer for Image Captioning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(3), 2286-2293. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16328

Section

AAAI Technical Track on Computer Vision II