Leveraging Weighted Cross-Graph Attention for Visual and Semantic Enhanced Video Captioning Network

Authors

  • Deepali Verma Department of Computer Science and Engineering, IIT (BHU), Varanasi
  • Arya Haldar Department of Computer Science and Engineering, IIT (BHU), Varanasi
  • Tanima Dutta Department of Computer Science and Engineering, IIT (BHU), Varanasi

DOI:

https://doi.org/10.1609/aaai.v37i2.25343

Keywords:

CV: Language and Vision, CMS: Analogical and Conceptual Reasoning, CV: Visual Reasoning & Symbolic Representations, ML: Deep Neural Architectures

Abstract

Video captioning has become a broad and active research area. Attention-based encoder-decoder methods are extensively used for caption generation. However, these methods mostly utilize visual attentive features to highlight video regions while overlooking the semantic features of the available captions. These semantic features contain significant information that helps generate highly informative, human-like descriptions. Therefore, we propose a novel visual and semantic enhanced video captioning network, named VSVCap, that efficiently utilizes multiple ground-truth captions. We aim to generate captions that are visually and semantically enhanced by exploiting both video and text modalities. To achieve this, we propose a fine-grained cross-graph attention mechanism that captures detailed graph embedding correspondence between visual graphs and textual knowledge graphs. We perform node-level matching and structure-level reasoning between the weighted regional graph and the knowledge graph. The proposed network achieves promising results on three benchmark datasets, i.e., YouTube2Text, MSR-VTT, and VATEX. The experimental results show that our network accurately captures the key objects, relationships, and semantically enhanced events of a video to generate human annotation-like captions.
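The node-level matching described above can be illustrated with a minimal sketch: each node of the visual (regional) graph attends over the nodes of the textual knowledge graph via a similarity-weighted softmax. This is a generic cross-graph attention toy example, not the authors' VSVCap implementation; the array sizes and random embeddings are arbitrary placeholders.

```python
import numpy as np

def cross_graph_attention(visual_nodes, text_nodes):
    """Node-level matching: each visual graph node attends over all
    textual knowledge-graph nodes and aggregates their embeddings."""
    # Pairwise similarity between every visual/textual node pair
    scores = visual_nodes @ text_nodes.T                      # (Nv, Nt)
    # Softmax over the textual nodes (numerically stabilized)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    # Attended textual context vector for each visual node
    return weights @ text_nodes                               # (Nv, d)

# Toy graphs: 5 visual region nodes, 7 knowledge-graph nodes, 16-d embeddings
rng = np.random.default_rng(0)
v = rng.normal(size=(5, 16))
t = rng.normal(size=(7, 16))
out = cross_graph_attention(v, t)
print(out.shape)  # (5, 16)
```

In the full model, such attended context vectors would feed a decoder alongside the visual features; structure-level reasoning would additionally compare edge patterns between the two graphs.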

Published

2023-06-26

How to Cite

Verma, D., Haldar, A., & Dutta, T. (2023). Leveraging Weighted Cross-Graph Attention for Visual and Semantic Enhanced Video Captioning Network. Proceedings of the AAAI Conference on Artificial Intelligence, 37(2), 2465-2473. https://doi.org/10.1609/aaai.v37i2.25343

Section

AAAI Technical Track on Computer Vision II