Contrastive Masked Autoencoders for Self-Supervised Video Hashing

Authors

  • Yuting Wang — Tsinghua Shenzhen International Graduate School, Tsinghua University; Research Center of Artificial Intelligence, Peng Cheng Laboratory
  • Jinpeng Wang — Tsinghua Shenzhen International Graduate School, Tsinghua University; Research Center of Artificial Intelligence, Peng Cheng Laboratory
  • Bin Chen — Harbin Institute of Technology, Shenzhen
  • Ziyun Zeng — Tsinghua Shenzhen International Graduate School, Tsinghua University; Research Center of Artificial Intelligence, Peng Cheng Laboratory
  • Shu-Tao Xia — Tsinghua Shenzhen International Graduate School, Tsinghua University; Research Center of Artificial Intelligence, Peng Cheng Laboratory

DOI:

https://doi.org/10.1609/aaai.v37i3.25373

Keywords:

CV: Image and Video Retrieval

Abstract

Self-Supervised Video Hashing (SSVH) models learn to generate short binary representations for videos without ground-truth supervision, improving the efficiency of large-scale video retrieval and attracting increasing research attention. The success of SSVH lies in the understanding of video content and the ability to capture the semantic relation among unlabeled videos. Typically, state-of-the-art SSVH methods consider these two points in a two-stage training pipeline: they first train an auxiliary network with instance-wise mask-and-predict tasks, and then train a hashing model to preserve the pseudo-neighborhood structure transferred from the auxiliary network. This consecutive training strategy is inflexible and also unnecessary. In this paper, we propose a simple yet effective one-stage SSVH method called ConMH, which incorporates video semantic information and video similarity relationship understanding in a single stage. To capture video semantic information for better hash learning, we adopt an encoder-decoder structure to reconstruct the video from its temporally masked frames. In particular, we find that a higher masking ratio helps video understanding. In addition, we fully exploit the similarity relationship between videos by maximizing agreement between two augmented views of a video, which contributes to more discriminative and robust hash codes. Extensive experiments on three large-scale video datasets (i.e., FCVID, ActivityNet and YFCC) indicate that ConMH achieves state-of-the-art results. Code is available at https://github.com/huangmozhi9527/ConMH.
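The abstract describes two objectives trained jointly in one stage: reconstructing a video from a high ratio of temporally masked frames with an encoder-decoder, and maximizing agreement between two augmented (masked) views of the same video. The sketch below is a minimal, hypothetical illustration of that combination; all module names, dimensions, the masking ratio, and the use of precomputed frame features are assumptions for readability, not the authors' released implementation.

```python
# Minimal sketch (assumptions: precomputed frame features, tiny transformer,
# 75% temporal masking, NT-Xent contrastive loss between two masked views).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedVideoEncoderDecoder(nn.Module):
    def __init__(self, feat_dim=2048, dim=256, hash_bits=64, n_frames=25, mask_ratio=0.75):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.embed = nn.Linear(feat_dim, dim)
        self.pos = nn.Parameter(torch.zeros(1, n_frames, dim))
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=2)
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=1)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.recon_head = nn.Linear(dim, feat_dim)
        self.hash_head = nn.Linear(dim, hash_bits)  # tanh-relaxed codes; sign() at inference

    def random_keep_indices(self, B, T, device):
        # keep a small fraction of frames; the rest are masked out before encoding
        keep = max(1, int(T * (1 - self.mask_ratio)))
        return torch.rand(B, T, device=device).argsort(dim=1)[:, :keep]

    def forward(self, frames):
        # frames: (B, T, feat_dim) precomputed frame features
        B, T, _ = frames.shape
        x = self.embed(frames) + self.pos[:, :T]
        keep_idx = self.random_keep_indices(B, T, x.device)
        visible = torch.gather(x, 1, keep_idx.unsqueeze(-1).expand(-1, -1, x.size(-1)))
        enc = self.encoder(visible)

        # scatter encoded visible tokens back; fill masked positions with a learned token
        full = self.mask_token.expand(B, T, -1).clone()
        full = full.scatter(1, keep_idx.unsqueeze(-1).expand(-1, -1, enc.size(-1)), enc)
        recon = self.recon_head(self.decoder(full + self.pos[:, :T]))

        video_repr = torch.tanh(self.hash_head(enc.mean(dim=1)))  # relaxed hash code
        return recon, video_repr

def nt_xent(z1, z2, tau=0.5):
    # standard normalized-temperature cross-entropy between two views of each video
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)
    sim = z @ z.t() / tau
    n = z1.size(0)
    sim = sim.masked_fill(torch.eye(2 * n, device=z.device, dtype=torch.bool), float('-inf'))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# one illustrative training step: two independently masked views of the same frames
model = MaskedVideoEncoderDecoder()
frames = torch.randn(8, 25, 2048)        # dummy frame features
recon1, z1 = model(frames)
recon2, z2 = model(frames)
loss = F.mse_loss(recon1, frames) + F.mse_loss(recon2, frames) + nt_xent(z1, z2)
loss.backward()
```

The key design choice the abstract highlights is the single-stage combination: the reconstruction term encourages the encoder to understand video content from few visible frames, while the contrastive term shapes the relaxed hash codes so that views of the same video agree and different videos separate.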

Published

2023-06-26

How to Cite

Wang, Y., Wang, J., Chen, B., Zeng, Z., & Xia, S.-T. (2023). Contrastive Masked Autoencoders for Self-Supervised Video Hashing. Proceedings of the AAAI Conference on Artificial Intelligence, 37(3), 2733-2741. https://doi.org/10.1609/aaai.v37i3.25373

Section

AAAI Technical Track on Computer Vision III