GMMFormer: Gaussian-Mixture-Model Based Transformer for Efficient Partially Relevant Video Retrieval

Authors

  • Yuting Wang, Tsinghua Shenzhen International Graduate School, Tsinghua University; Research Center of Artificial Intelligence, Peng Cheng Laboratory
  • Jinpeng Wang, Tsinghua Shenzhen International Graduate School, Tsinghua University; Research Center of Artificial Intelligence, Peng Cheng Laboratory
  • Bin Chen, Harbin Institute of Technology, Shenzhen; Research Center of Artificial Intelligence, Peng Cheng Laboratory
  • Ziyun Zeng, Tsinghua Shenzhen International Graduate School, Tsinghua University; Research Center of Artificial Intelligence, Peng Cheng Laboratory
  • Shu-Tao Xia, Tsinghua Shenzhen International Graduate School, Tsinghua University; Research Center of Artificial Intelligence, Peng Cheng Laboratory

DOI:

https://doi.org/10.1609/aaai.v38i6.28389

Keywords:

CV: Image and Video Retrieval

Abstract

Given a text query, partially relevant video retrieval (PRVR) seeks to find, in a database, untrimmed videos that contain pertinent moments. For PRVR, clip modeling is essential to capture the partial relevance between texts and videos. Current PRVR methods adopt scanning-based clip construction to achieve explicit clip modeling, which is information-redundant and requires a large storage overhead. To solve this efficiency problem, this paper proposes GMMFormer, a Gaussian-Mixture-Model based Transformer that models clip representations implicitly. During frame interactions, we incorporate Gaussian-Mixture-Model constraints so that each frame focuses on its adjacent frames instead of the whole video. The generated representations then contain multi-scale clip information, achieving implicit clip modeling. In addition, existing PRVR methods ignore the semantic differences between text queries relevant to the same video, which leads to a sparse embedding space. We propose a query diverse loss to distinguish these queries, making the embedding space denser and richer in semantic information. Extensive experiments on three large-scale video datasets (i.e., TVR, ActivityNet Captions, and Charades-STA) demonstrate the superiority and efficiency of GMMFormer.
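To make the two mechanisms concrete, below is a minimal NumPy sketch of Gaussian-constrained self-attention: the attention map is re-weighted by a Gaussian prior over frame distance, and several variances are aggregated into a multi-scale, clip-aware representation. The multiplicative combination, the averaging aggregation, the sigma values, and all function names here are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def gaussian_prior(T, sigma):
    # (T, T) prior: w[i, j] = exp(-(i - j)^2 / (2 * sigma^2)),
    # so weight decays with temporal distance between frames i and j.
    idx = np.arange(T)
    dist2 = (idx[:, None] - idx[None, :]) ** 2
    return np.exp(-dist2 / (2.0 * sigma ** 2))

def gaussian_attention(q, k, v, sigma):
    # Scaled dot-product self-attention re-weighted by the Gaussian
    # prior, so each frame attends mostly to its temporal neighborhood.
    T, d = q.shape
    scores = q @ k.T / np.sqrt(d)                      # (T, T)
    attn = softmax(scores, axis=-1) * gaussian_prior(T, sigma)
    attn = attn / attn.sum(axis=-1, keepdims=True)     # renormalize rows
    return attn @ v                                    # (T, d)

def multi_scale_block(x, sigmas=(1.0, 4.0, 16.0, np.inf)):
    # Run attention under several variances (np.inf behaves like plain
    # global attention) and average, so the output mixes clip
    # information at multiple temporal scales.
    return np.mean([gaussian_attention(x, x, x, s) for s in sigmas], axis=0)

# Example: 128 frame features of dimension 256.
frames = np.random.randn(128, 256)
clip_aware = multi_scale_block(frames)                 # (128, 256)
```

The query diverse loss can likewise be sketched as a hinge on pairwise similarity among text queries annotated to the same video; the hinge form and the margin value are assumptions made here for illustration.

```python
def query_diverse_loss(q_embs, margin=0.2):
    # Hinge on pairwise cosine similarity among text queries relevant to
    # the SAME video: pairs that collapse together are penalized, keeping
    # same-video queries distinguishable in the embedding space.
    q = q_embs / np.linalg.norm(q_embs, axis=-1, keepdims=True)
    sim = q @ q.T                            # (N, N) cosine similarities
    iu = np.triu_indices(len(q), k=1)        # each distinct pair once
    return np.maximum(sim[iu] - margin, 0.0).mean()
```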

Published

2024-03-24

How to Cite

Wang, Y., Wang, J., Chen, B., Zeng, Z., & Xia, S.-T. (2024). GMMFormer: Gaussian-Mixture-Model Based Transformer for Efficient Partially Relevant Video Retrieval. Proceedings of the AAAI Conference on Artificial Intelligence, 38(6), 5767-5775. https://doi.org/10.1609/aaai.v38i6.28389

Issue

Vol. 38 No. 6 (2024)

Section

AAAI Technical Track on Computer Vision V