Temporal-Difference Learning With Sampling Baseline for Image Captioning

Authors

  • Hui Chen, Tsinghua University
  • Guiguang Ding, Tsinghua University
  • Sicheng Zhao, Tsinghua University
  • Jungong Han, Lancaster University

DOI:

https://doi.org/10.1609/aaai.v32i1.12263

Keywords:

Image captioning, Reinforcement learning, LSTM

Abstract

Existing methods for image captioning usually train the language model under cross-entropy loss, which leads to exposure bias and an inconsistency with the evaluation metrics. Recent research has shown that these two issues can be well addressed by policy-gradient methods from the reinforcement learning domain, owing to their unique capability of directly optimizing discrete, non-differentiable evaluation metrics. In this paper, we use a reinforcement learning method to train the image captioning model. Specifically, we train our image captioning model to maximize the overall reward of the sentences by adopting temporal-difference (TD) learning, which takes the correlation between temporally successive actions into account. In this way, we assign different values to different words in a sampled sentence via a discounted coefficient when back-propagating the gradient with the REINFORCE algorithm, enabling the correlation between actions to be learned. Moreover, instead of estimating a "baseline" with another network to normalize the rewards, we use the reward of another Monte-Carlo sample as the baseline to avoid high variance. We show that our proposed method improves the quality of the generated captions and outperforms state-of-the-art methods on the benchmark MS COCO dataset in terms of seven evaluation metrics.
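
As a rough sketch of the update the abstract describes (the notation below is assumed for illustration and is not taken verbatim from the paper): let $w^s = (w^s_1, \ldots, w^s_T)$ be a caption sampled from the model $p_\theta$ for image $I$, let $w^b$ be a second Monte-Carlo sample used as the baseline, let $r(\cdot)$ be the sentence-level reward (e.g., CIDEr), and let $\gamma$ be the discount coefficient. A REINFORCE-style gradient with discounted per-word credit then takes roughly the form

$$\nabla_\theta L(\theta) \approx -\sum_{t=1}^{T} \gamma^{\,T-t}\,\bigl(r(w^s) - r(w^b)\bigr)\,\nabla_\theta \log p_\theta\bigl(w^s_t \mid w^s_{1:t-1}, I\bigr),$$

so that words at different time steps receive different effective weights through the discount coefficient, while the sampled baseline $r(w^b)$ centers the reward without training a separate value network.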

Published

2018-04-27

How to Cite

Chen, H., Ding, G., Zhao, S., & Han, J. (2018). Temporal-Difference Learning With Sampling Baseline for Image Captioning. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.12263