Towards Balanced Alignment: Modal-Enhanced Semantic Modeling for Video Moment Retrieval

Authors

  • Zhihang Liu University of Science and Technology of China
  • Jun Li People's Daily Online
  • Hongtao Xie University of Science and Technology of China
  • Pandeng Li University of Science and Technology of China
  • Jiannan Ge University of Science and Technology of China
  • Sun-Ao Liu University of Science and Technology of China
  • Guoqing Jin People's Daily Online

DOI:

https://doi.org/10.1609/aaai.v38i4.28177

Keywords:

CV: Video Understanding & Activity Analysis, CV: Image and Video Retrieval, CV: Language and Vision

Abstract

Video Moment Retrieval (VMR) aims to retrieve temporal segments in untrimmed videos corresponding to a given language query by constructing cross-modal alignment strategies. However, existing strategies are often sub-optimal because they ignore the modality imbalance problem, i.e., the semantic richness inherent in videos far exceeds that of a given limited-length sentence. Therefore, in pursuit of better alignment, a natural idea is to enhance the video modality to filter out query-irrelevant semantics, and to enhance the text modality to capture more segment-relevant knowledge. In this paper, we introduce Modal-Enhanced Semantic Modeling (MESM), a novel framework for more balanced alignment that enhances features at two levels. First, we enhance the video modality at the frame-word level through word reconstruction. This strategy emphasizes the portions of frame-level features associated with query words while suppressing irrelevant parts. The enhanced video therefore contains less redundant semantics and is more balanced with the textual modality. Second, we enhance the textual modality at the segment-sentence level by learning complementary knowledge from context sentences and ground-truth segments. With this knowledge added to the query, the textual modality retains more meaningful semantics and is more balanced with the video modality. By implementing both levels of MESM, the semantic information from the two modalities is better balanced for alignment, thereby bridging the modality gap. Experiments on three widely used benchmarks, including out-of-distribution settings, show that the proposed framework achieves new state-of-the-art performance with notable generalization ability (e.g., 4.42% and 7.69% average gains in R1@0.7 on Charades-STA and Charades-CG, respectively). The code will be available at https://github.com/lntzm/MESM.
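
As a rough illustration of the frame-word level enhancement described in the abstract, the sketch below re-weights frame features by their affinity to query words and attaches a simple word-reconstruction loss. The module name, layer names, gating scheme, and loss are illustrative assumptions only; they are not taken from the paper or the released code (see the GitHub repository above for the official implementation).

```python
# Minimal PyTorch sketch (assumed, not the authors' implementation) of
# frame-word level enhancement: suppress query-irrelevant frame content and
# reconstruct word embeddings from the enhanced frames.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FrameWordEnhancer(nn.Module):
    """Re-weights frame features by cross-modal affinity to query words."""

    def __init__(self, dim: int):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)      # projects word features (queries)
        self.k_proj = nn.Linear(dim, dim)      # projects frame features (keys)
        self.recon_head = nn.Linear(dim, dim)  # hypothetical word-reconstruction head

    def forward(self, frames: torch.Tensor, words: torch.Tensor):
        # frames: (B, T, D) frame-level video features
        # words:  (B, L, D) word-level query features
        q = self.q_proj(words)                                           # (B, L, D)
        k = self.k_proj(frames)                                          # (B, T, D)
        affinity = torch.bmm(q, k.transpose(1, 2)) / k.size(-1) ** 0.5   # (B, L, T)
        # Per-frame relevance: how strongly any query word attends to each frame.
        frame_gate = torch.sigmoid(affinity.max(dim=1).values)           # (B, T)
        enhanced_frames = frames * frame_gate.unsqueeze(-1)              # suppress irrelevant frames
        # Word reconstruction: attend from words back to the enhanced frames and
        # try to recover the word embeddings, encouraging the gate to keep
        # query-relevant video content.
        attn = F.softmax(affinity, dim=-1)                               # (B, L, T)
        recon_words = self.recon_head(torch.bmm(attn, enhanced_frames))  # (B, L, D)
        recon_loss = F.mse_loss(recon_words, words)
        return enhanced_frames, recon_loss


if __name__ == "__main__":
    # Toy usage with random features.
    enhancer = FrameWordEnhancer(dim=256)
    frames = torch.randn(2, 64, 256)   # 2 videos, 64 frames each
    words = torch.randn(2, 12, 256)    # 2 queries, 12 words each
    out, loss = enhancer(frames, words)
    print(out.shape, loss.item())
```

The segment-sentence level enhancement of the text modality would be a separate component; it is not sketched here.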

Published

2024-03-24

How to Cite

Liu, Z., Li, J., Xie, H., Li, P., Ge, J., Liu, S.-A., & Jin, G. (2024). Towards Balanced Alignment: Modal-Enhanced Semantic Modeling for Video Moment Retrieval. Proceedings of the AAAI Conference on Artificial Intelligence, 38(4), 3855-3863. https://doi.org/10.1609/aaai.v38i4.28177

Issue

Vol. 38 No. 4 (2024)

Section

AAAI Technical Track on Computer Vision III