Semantic Grouping Network for Video Captioning

Authors

  • Hobin Ryu, KAIST
  • Sunghun Kang, KAIST
  • Haeyong Kang, KAIST
  • Chang D. Yoo, KAIST

Keywords:

Language and Vision

Abstract

This paper considers a video caption generating network referred to as the Semantic Grouping Network (SGN) that attempts (1) to group video frames with the discriminating word phrases of the partially decoded caption and then (2) to decode those semantically aligned groups in predicting the next word. As consecutive frames are not likely to provide unique information, prior methods have focused on discarding or merging repetitive information based only on the input video. The SGN learns an algorithm to capture the most discriminating word phrases of the partially decoded caption and a mapping that associates each phrase with the relevant video frames; establishing this mapping allows semantically related frames to be clustered, which reduces redundancy. In contrast to prior methods, the continuous feedback from decoded words enables the SGN to dynamically update the video representation so that it adapts to the partially decoded caption. Furthermore, a contrastive attention loss is proposed to facilitate accurate alignment between a word phrase and video frames without manual annotations. The SGN achieves state-of-the-art performance, outperforming runner-up methods by margins of 2.1%p and 2.4%p in CIDEr-D score on the MSVD and MSR-VTT datasets, respectively. Extensive experiments demonstrate the effectiveness and interpretability of the SGN.
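The core idea in the abstract, attending from each partially decoded phrase to the video frames and pooling the attended frames into phrase-aligned groups, can be illustrated with a minimal sketch. This is not the authors' implementation; the function name `group_frames`, the dot-product scoring, and the tensor shapes are illustrative assumptions, and the sketch omits the phrase extraction, decoder, and contrastive attention loss of the full SGN.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def group_frames(phrase_vecs, frame_feats):
    """Illustrative phrase-to-frame grouping (not the paper's exact model).

    phrase_vecs: (P, D) embeddings of P discriminating word phrases
    frame_feats: (T, D) features of T video frames
    Returns the (P, T) attention map and (P, D) grouped frame features.
    """
    scores = phrase_vecs @ frame_feats.T       # (P, T) phrase-frame relevance
    attn = softmax(scores, axis=-1)            # each phrase attends over frames
    groups = attn @ frame_feats                # (P, D) semantically aligned groups
    return attn, groups

rng = np.random.default_rng(0)
phrases = rng.normal(size=(3, 8))   # 3 phrases from a partially decoded caption
frames = rng.normal(size=(10, 8))   # 10 frame features
attn, groups = group_frames(phrases, frames)
```

Because the attention is recomputed from the phrases of the caption decoded so far, the grouped representation changes as decoding proceeds, which is the dynamic-update property the abstract highlights.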

Published

2021-05-18

How to Cite

Ryu, H., Kang, S., Kang, H., & Yoo, C. D. (2021). Semantic Grouping Network for Video Captioning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(3), 2514-2522. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16353

Section

AAAI Technical Track on Computer Vision II