Attention-Aligned Transformer for Image Captioning

Authors

  • Zhengcong Fei, Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology; University of Chinese Academy of Sciences

DOI:

https://doi.org/10.1609/aaai.v36i1.19940

Keywords:

Computer Vision (CV)

Abstract

Recently, attention-based image captioning models, which are expected to ground the correct image regions for proper word generation, have achieved remarkable performance. However, some researchers have argued that existing attention mechanisms suffer from a "deviated focus" problem when determining the effective and influential image features. In this paper, we present A2 - an attention-aligned Transformer for image captioning, which guides attention learning in a perturbation-based, self-supervised manner without any annotation overhead. Specifically, we apply a mask operation to image regions through a learnable network to estimate the true function of each region in the ultimate description generation. We hypothesize that the necessary image region features, for which a small disturbance causes an obvious performance degradation, deserve more attention weight. We then propose four alignment strategies that use this information to refine the attention weight distribution. Under such a pattern, image regions are attended correctly with respect to the output words. Extensive experiments conducted on the MS COCO dataset demonstrate that the proposed A2 Transformer consistently outperforms baselines in both automatic metrics and human evaluation. Trained models and code for reproducing the experiments are publicly available.
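
The perturbation-and-alignment idea described above can be illustrated with a minimal, hypothetical PyTorch-style sketch (not the paper's released code). The `caption_model.loss` interface, the brute-force zero-masking of each region in place of the paper's learnable masking network, and the single interpolation step standing in for the four proposed alignment strategies are all assumptions made for illustration only.

```python
import torch
import torch.nn.functional as F


def perturbation_importance(caption_model, region_feats, caption_tokens):
    """Estimate each region's importance by masking it and measuring how much
    the captioning loss degrades (perturbation-based, no extra annotation).

    `caption_model.loss(features, tokens)` is a hypothetical interface that
    returns a scalar caption loss for the given region features.
    """
    base_loss = caption_model.loss(region_feats, caption_tokens)
    scores = []
    for i in range(region_feats.size(1)):      # iterate over image regions
        perturbed = region_feats.clone()
        perturbed[:, i, :] = 0.0               # mask (zero out) region i
        degraded = caption_model.loss(perturbed, caption_tokens)
        scores.append((degraded - base_loss).detach())
    # Regions whose removal hurts the caption most receive the largest weight.
    return F.softmax(torch.stack(scores), dim=-1)


def align_attention(attn_weights, importance, alpha=0.5):
    """One possible alignment step: interpolate the model's attention
    distribution toward the perturbation-derived importance distribution."""
    refined = (1.0 - alpha) * attn_weights + alpha * importance
    return refined / refined.sum(dim=-1, keepdim=True)
```

In the paper itself the masking is produced by a learnable network rather than an exhaustive leave-one-out loop, and the refined weights guide attention learning during training; the sketch only conveys the underlying intuition that regions whose removal degrades the caption deserve more attention.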


Published

2022-06-28

How to Cite

Fei, Z. (2022). Attention-Aligned Transformer for Image Captioning. Proceedings of the AAAI Conference on Artificial Intelligence, 36(1), 607-615. https://doi.org/10.1609/aaai.v36i1.19940

Issue

Vol. 36 No. 1 (2022)

Section

AAAI Technical Track on Computer Vision I