See, Rank, and Filter: Important Word-Aware Clip Filtering via Scene Understanding for Moment Retrieval and Highlight Detection

Authors

  • YuEun Lee Kyung Hee University
  • Jung Uk Kim Kyung Hee University

DOI:

https://doi.org/10.1609/aaai.v40i8.37516

Abstract

Video moment retrieval (MR) and highlight detection (HD) with natural language queries aim to localize relevant moments and key highlights in video clips. However, existing methods overlook the importance of individual words, treating the entire text query and video clips as black boxes, which hinders contextual understanding. In this paper, we propose a novel approach that enables fine-grained clip filtering by identifying and prioritizing important words in the query. Our method integrates image-text scene understanding through Multimodal Large Language Models (MLLMs) and enhances the semantic understanding of video clips. We introduce a feature enhancement module (FEM) to capture important words from the query and a ranking-based filtering module (RFM) to iteratively refine video clips based on their relevance to these important words. Extensive experiments demonstrate that our approach significantly outperforms existing state-of-the-art methods, achieving superior performance in both MR and HD tasks.

Published

2026-03-14

How to Cite

Lee, Y., & Kim, J. U. (2026). See, Rank, and Filter: Important Word-Aware Clip Filtering via Scene Understanding for Moment Retrieval and Highlight Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 40(8), 5936–5944. https://doi.org/10.1609/aaai.v40i8.37516

Section

AAAI Technical Track on Computer Vision V