TY - JOUR
AU - Seo, Paul Hongsuck
AU - Sharma, Piyush
AU - Levinboim, Tomer
AU - Han, Bohyung
AU - Soricut, Radu
PY - 2020/04/03
Y2 - 2024/03/29
TI - Reinforcing an Image Caption Generator Using Off-Line Human Feedback
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 34
IS - 03
SE - AAAI Technical Track: Humans and AI
DO - 10.1609/aaai.v34i03.5655
UR - https://ojs.aaai.org/index.php/AAAI/article/view/5655
SP - 2693-2700
AB - Human ratings are currently the most accurate way to assess the quality of an image captioning model, yet most often the only used outcome of an expensive human rating evaluation is a few overall statistics over the evaluation dataset. In this paper, we show that the signal from instance-level human caption ratings can be leveraged to improve captioning models, even when the amount of caption ratings is several orders of magnitude less than the caption training data. We employ a policy gradient method to maximize the human ratings as rewards in an off-policy reinforcement learning setting, where policy gradients are estimated by samples from a distribution that focuses on the captions in a caption ratings dataset. Our empirical evidence indicates that the proposed method learns to generalize the human raters' judgments to a previously unseen set of images, as judged by a different set of human judges, and additionally on a different, multi-dimensional side-by-side human evaluation procedure.
ER - 