Reinforcing an Image Caption Generator Using Off-Line Human Feedback

Authors

  • Paul Hongsuck Seo, POSTECH
  • Piyush Sharma, Google Research
  • Tomer Levinboim, Google Research
  • Bohyung Han, Seoul National University
  • Radu Soricut, Google Research

DOI:

https://doi.org/10.1609/aaai.v34i03.5655

Abstract

Human ratings are currently the most accurate way to assess the quality of an image captioning model, yet most often the only outcome used from an expensive human rating evaluation is a few overall statistics over the evaluation dataset. In this paper, we show that the signal from instance-level human caption ratings can be leveraged to improve captioning models, even when the number of caption ratings is several orders of magnitude smaller than the amount of caption training data. We employ a policy gradient method to maximize the human ratings as rewards in an off-policy reinforcement learning setting, where policy gradients are estimated from samples drawn from a distribution that focuses on the captions in a caption ratings dataset. Our empirical evidence indicates that the proposed method learns to generalize the human raters' judgments to a previously unseen set of images, as judged by a different set of human judges, and additionally under a different, multi-dimensional side-by-side human evaluation procedure.
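To make the off-policy setup in the abstract concrete, the sketch below shows one importance-weighted REINFORCE update in which rated captions from a ratings dataset serve as the behavior samples and their human ratings serve as rewards. This is an illustrative sketch only, not the authors' implementation; the names (CaptioningModel-style interface, caption_log_prob, rated_captions, behavior_logps) are hypothetical placeholders.

    # Minimal sketch (not the authors' code) of an off-policy policy-gradient
    # update that uses human caption ratings as rewards.
    import torch

    def off_policy_pg_step(model, optimizer, images, rated_captions, ratings, behavior_logps):
        """One update step.

        images:          batch of image features, shape (B, ...)
        rated_captions:  token ids of captions drawn from the ratings dataset
                         (the behavior distribution), shape (B, T)
        ratings:         human quality ratings used as rewards, shape (B,)
        behavior_logps:  log-probabilities of those captions under the behavior
                         distribution that produced the ratings data, shape (B,)
        """
        # Log-probability of each rated caption under the current policy
        # (hypothetical model method, assumed to return a (B,) tensor).
        logps = model.caption_log_prob(images, rated_captions)

        # Importance weights correct for sampling from the ratings dataset
        # instead of the current policy; detached so they are not differentiated.
        weights = torch.exp(logps.detach() - behavior_logps)

        # REINFORCE-style objective: maximize the importance-weighted expected rating.
        loss = -(weights * ratings * logps).mean()

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

In practice one would also clip or normalize the importance weights and subtract a baseline from the ratings to reduce variance; those details are omitted here for brevity.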

Published

2020-04-03

How to Cite

Seo, P. H., Sharma, P., Levinboim, T., Han, B., & Soricut, R. (2020). Reinforcing an Image Caption Generator Using Off-Line Human Feedback. Proceedings of the AAAI Conference on Artificial Intelligence, 34(03), 2693-2700. https://doi.org/10.1609/aaai.v34i03.5655

Section

AAAI Technical Track: Humans and AI