Weakly Supervised Video Moment Localization with Contrastive Negative Sample Mining


  • Minghang Zheng Peking University
  • Yanjie Huang Peking University
  • Qingchao Chen Peking University
  • Yang Liu Peking University; Beijing Institute for General Artificial Intelligence




Computer Vision (CV)


Video moment localization aims to localize the video segment most relevant to a given free-form natural language query. The weakly supervised setting, where only a video-level description is available during training, is receiving increasing attention due to its lower annotation cost. Prior weakly supervised methods mainly use sliding windows to generate temporal proposals, which are independent of the video content and of low quality, and train the model to distinguish matched video-query pairs from unmatched pairs collected from different videos, neglecting the fact that what the model actually needs is to distinguish unaligned segments within the same video. In this work, we propose a novel weakly supervised solution by introducing Contrastive Negative sample Mining (CNM). Specifically, we use a learnable Gaussian mask to generate positive samples, highlighting the video frames most related to the query, and treat the other frames of the video and the whole video as easy and hard negative samples, respectively. We then train our network with an Intra-Video Contrastive loss to make the positive and negative samples more discriminative. Our method has two advantages: (1) the proposal generation process with a learnable Gaussian mask is more efficient and yields higher-quality positive samples; (2) the more difficult intra-video negative samples enable our model to distinguish highly confusing scenes. Experiments on two datasets show the effectiveness of our method. Code can be found at https://github.com/minghangz/cnm.
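The sampling scheme described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (see the linked repository for that); the function names, the margin-based loss form, and the random features are illustrative assumptions. In the actual method, the Gaussian `center` and `width` are predicted by the network from the video and query, whereas here they are fixed constants.

```python
import numpy as np

def gaussian_mask(num_frames, center, width):
    # Gaussian weights over normalized frame positions in [0, 1];
    # in CNM, `center` and `width` are learnable (network-predicted).
    t = np.linspace(0.0, 1.0, num_frames)
    return np.exp(-((t - center) ** 2) / (2.0 * width ** 2))

def query_score(frame_feats, query_feat, weights):
    # Similarity between the query and the mask-weighted video feature.
    pooled = (weights[:, None] * frame_feats).sum(0) / weights.sum()
    return float(pooled @ query_feat)

rng = np.random.default_rng(0)
T, D = 16, 8                       # frames, feature dimension (toy sizes)
frames = rng.normal(size=(T, D))   # stand-in frame features
query = rng.normal(size=D)         # stand-in query feature

mask = gaussian_mask(T, center=0.5, width=0.1)
s_pos = query_score(frames, query, mask)          # positive: masked frames
s_easy = query_score(frames, query, 1.0 - mask)   # easy negative: other frames
s_hard = query_score(frames, query, np.ones(T))   # hard negative: whole video

# A margin-based stand-in for the intra-video contrastive objective:
# push the positive score above both intra-video negative scores.
margin = 0.1
loss = max(0.0, margin - s_pos + s_easy) + max(0.0, margin - s_pos + s_hard)
```

Because all three samples come from the same video, the negatives share background appearance with the positive, which is what makes them harder than negatives drawn from other videos.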




How to Cite

Zheng, M., Huang, Y., Chen, Q., & Liu, Y. (2022). Weakly Supervised Video Moment Localization with Contrastive Negative Sample Mining. Proceedings of the AAAI Conference on Artificial Intelligence, 36(3), 3517-3525. https://doi.org/10.1609/aaai.v36i3.20263



AAAI Technical Track on Computer Vision III