Visual Consensus Modeling for Video-Text Retrieval

Authors

  • Shuqiang Cao, Shandong University
  • Bairui Wang, Meituan
  • Wei Zhang, Shandong University
  • Lin Ma, Meituan

DOI:

https://doi.org/10.1609/aaai.v36i1.19891

Keywords:

Computer Vision (CV)

Abstract

In this paper, we propose a novel method, namely visual consensus modeling, to mine the commonsense knowledge shared between the video and text modalities for video-text retrieval. Unlike existing works, which learn the video and text representations and their complicated relationships solely from pairwise video-text data, we make the first attempt to model the visual consensus by mining visual concepts from videos and exploiting their co-occurrence patterns within the video and text modalities, without relying on any additional concept annotations. Specifically, we build a shareable and learnable graph as the visual consensus, where the nodes denote the mined visual concepts and the edges represent the co-occurrence relationships between them. Extensive experimental results on public benchmark datasets demonstrate that our proposed method, with its ability to effectively model the visual consensus, achieves state-of-the-art performance on the bidirectional video-text retrieval task. Our code is available at https://github.com/sqiangcao99/VCM.
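For illustration only, a shareable, learnable consensus graph of the kind the abstract describes might be sketched in PyTorch as follows. This is a hedged sketch, not the authors' implementation (the official code lives at the repository linked above); the class name ConsensusGraph, the parameter counts, and the single softmax-normalized propagation step are assumptions made for this example.

    import torch
    import torch.nn as nn

    class ConsensusGraph(nn.Module):
        """Sketch of a shareable, learnable concept co-occurrence graph.

        Nodes are the mined visual concepts (one learnable embedding per
        concept); a learnable adjacency matrix plays the role of the
        co-occurrence edges between concepts.
        """

        def __init__(self, num_concepts: int, dim: int):
            super().__init__()
            # One learnable embedding per mined visual concept (the nodes).
            self.node_embed = nn.Parameter(torch.randn(num_concepts, dim))
            # Learnable edge weights (the co-occurrence relationships),
            # initialized near the identity so each concept starts by
            # attending mostly to itself.
            self.adj = nn.Parameter(torch.eye(num_concepts))

        def forward(self) -> torch.Tensor:
            # One round of graph propagation: each concept aggregates its
            # neighbors according to softmax-normalized edge weights.
            weights = torch.softmax(self.adj, dim=-1)  # (N, N)
            return weights @ self.node_embed           # (N, dim)

    # Usage: consensus-refined concept features, shared by both the video
    # and text branches of a retrieval model.
    graph = ConsensusGraph(num_concepts=512, dim=256)
    consensus_features = graph()  # shape (512, 256)

Because the same graph parameters would be queried by both modalities, the co-occurrence structure it learns acts as the shared "consensus" between video and text representations.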

Published

2022-06-28

How to Cite

Cao, S., Wang, B., Zhang, W., & Ma, L. (2022). Visual Consensus Modeling for Video-Text Retrieval. Proceedings of the AAAI Conference on Artificial Intelligence, 36(1), 167-175. https://doi.org/10.1609/aaai.v36i1.19891

Section

AAAI Technical Track on Computer Vision I