Stack-Captioning: Coarse-to-Fine Learning for Image Captioning

Authors

  • Jiuxiang Gu, Nanyang Technological University
  • Jianfei Cai, Nanyang Technological University
  • Gang Wang, Alibaba AI Labs
  • Tsuhan Chen, Nanyang Technological University

DOI:

https://doi.org/10.1609/aaai.v32i1.12266

Keywords:

VIS

Abstract

Existing image captioning approaches typically train a one-stage sentence decoder, which makes it difficult to generate rich, fine-grained descriptions. On the other hand, multi-stage image captioning models are hard to train due to the vanishing gradient problem. In this paper, we propose a coarse-to-fine multi-stage prediction framework for image captioning, composed of multiple decoders, each of which operates on the output of the previous stage to produce increasingly refined image descriptions. Our proposed learning approach addresses the difficulty of vanishing gradients during training by providing a learning objective that enforces intermediate supervision. In particular, we optimize our model with a reinforcement learning approach that uses the output of each intermediate decoder's test-time inference algorithm, together with the output of its preceding decoder, to normalize the rewards, which simultaneously solves the well-known exposure bias problem and the loss-evaluation mismatch problem. We evaluate the proposed approach extensively on MSCOCO and show that it achieves state-of-the-art performance.
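The abstract does not include code, but the two ideas it describes (stacked decoders with intermediate supervision, and a stage-wise reward baseline) can be sketched concretely. Below is a minimal PyTorch sketch under stated assumptions: the class and function names, the dimensions, the choice of an LSTM cell per stage, and the way the greedy and preceding-stage scores are combined into a baseline are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class StackCaptioner(nn.Module):
    """Illustrative coarse-to-fine captioner: a coarse decoder followed by
    refinement decoders, each conditioned on the image features and on the
    previous stage's hidden states at the same time step. Hypothetical and
    simplified (no attention, single image feature vector)."""

    def __init__(self, vocab_size, feat_dim=2048, hid=512, num_stages=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hid)
        self.img = nn.Linear(feat_dim, hid)
        # One LSTM cell and output head per stage; stages i > 0 additionally
        # consume the previous stage's hidden state, so their input is wider.
        self.cells = nn.ModuleList(
            [nn.LSTMCell(hid * (3 if i > 0 else 2), hid) for i in range(num_stages)]
        )
        self.heads = nn.ModuleList(
            [nn.Linear(hid, vocab_size) for _ in range(num_stages)]
        )

    def forward(self, feats, captions):
        """feats: (B, feat_dim) image features; captions: (B, T) token ids.
        Returns per-stage logits so every stage can receive intermediate
        supervision (cross-entropy or an RL reward) during training."""
        B, T = captions.shape
        v = self.img(feats)               # (B, hid) projected image feature
        words = self.embed(captions)      # (B, T, hid) input word embeddings
        prev_h = None                     # hidden states of the coarser stage
        all_logits = []
        for i, (cell, head) in enumerate(zip(self.cells, self.heads)):
            h = feats.new_zeros(B, cell.hidden_size)
            c = feats.new_zeros(B, cell.hidden_size)
            hs, logits = [], []
            for t in range(T):
                x = [words[:, t], v]
                if i > 0:
                    x.append(prev_h[t])   # condition on the coarser stage
                h, c = cell(torch.cat(x, dim=1), (h, c))
                hs.append(h)
                logits.append(head(h))
            prev_h = hs
            all_logits.append(torch.stack(logits, dim=1))  # (B, T, vocab)
        return all_logits

def stagewise_reward(sample_scores, greedy_scores, prev_stage_scores):
    """Sketch of the stage-wise reward normalization the abstract describes:
    a sampled caption's score at stage i is baselined against that stage's
    greedy (test-time) decode and the preceding stage's output. Averaging
    the two baselines is an assumption; the paper defines the exact form."""
    baseline = 0.5 * (greedy_scores + prev_stage_scores)
    return sample_scores - baseline
```

In such a setup, each stage's logits would receive their own loss (cross-entropy or a CIDEr-style RL reward), so gradients reach the earlier decoders directly rather than only through the final stage, which is the intermediate-supervision mechanism the abstract credits with avoiding vanishing gradients.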

Published

2018-04-27

How to Cite

Gu, J., Cai, J., Wang, G., & Chen, T. (2018). Stack-Captioning: Coarse-to-Fine Learning for Image Captioning. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.12266