Query-centric Audio-Visual Cognition Network for Moment Retrieval, Segmentation and Step-Captioning

Authors

  • Yunbin Tu — School of Computer Science and Technology, University of Chinese Academy of Sciences
  • Liang Li — Key Laboratory of AI Safety of CAS, Institute of Computing Technology, Chinese Academy of Sciences; School of Computer Science and Technology, University of Chinese Academy of Sciences
  • Li Su — School of Computer Science and Technology, University of Chinese Academy of Sciences; Peng Cheng Laboratory
  • Qingming Huang — School of Computer Science and Technology, University of Chinese Academy of Sciences

DOI:

https://doi.org/10.1609/aaai.v39i7.32803

Abstract

Video has emerged as a favored multimedia format on the internet. To better grasp video content, a new benchmark, HIREST, has been introduced, comprising video retrieval, moment retrieval, moment segmentation, and step-captioning. The pioneering work adopts a pre-trained CLIP-based model for video retrieval and leverages it as a feature extractor for the other three challenging tasks, which are solved in a multi-task learning paradigm. Nevertheless, this work struggles to learn a comprehensive cognition of user-preferred content, because it disregards the hierarchies and association relations across modalities. In this paper, guided by the shallow-to-deep principle, we propose a query-centric audio-visual cognition (QUAG) network to construct a reliable multi-modal representation for moment retrieval, segmentation, and step-captioning. Specifically, we first design modality-synergistic perception to obtain rich audio-visual content by modeling global contrastive alignment and local fine-grained interaction between the visual and audio modalities. Then, we devise query-centric cognition, which uses the deep-level query to perform temporal-channel filtration on the shallow-level audio-visual representation. This cognizes user-preferred content and thus yields a query-centric audio-visual representation for the three tasks. Extensive experiments show that QUAG achieves state-of-the-art results on HIREST. Further, we evaluate QUAG on the query-based video summarization task and verify its good generalization.
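The query-centric cognition described above filters a shallow-level audio-visual representation along both the temporal and channel dimensions, conditioned on the query. The following is a minimal sketch of that idea, not the paper's actual implementation: the function name, the learned projection `W_c`, and the use of a sigmoid channel gate with softmax temporal attention are all illustrative assumptions.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def query_centric_filtration(av_feats, query, rng):
    """Hypothetical sketch of query-guided temporal-channel filtration.

    av_feats: (T, d) shallow-level audio-visual features over T clips.
    query:    (d,)   deep-level query embedding.
    rng:      numpy Generator used to stand in for a learned projection.
    """
    T, d = av_feats.shape
    # Stand-in for a learned channel-projection matrix (an assumption).
    W_c = rng.standard_normal((d, d)) / np.sqrt(d)
    # Channel filtration: a per-channel gate in (0, 1) derived from the query.
    channel_gate = 1.0 / (1.0 + np.exp(-(W_c @ query)))        # (d,)
    # Temporal filtration: attention weights over clips w.r.t. the query.
    temporal_att = softmax(av_feats @ query / np.sqrt(d))      # (T,)
    # Query-centric representation: both gates applied to the features.
    return av_feats * channel_gate * temporal_att[:, None]     # (T, d)
```

Because both the channel gate and the temporal weights lie in (0, 1), the filtration can only suppress, never amplify, feature entries that are weakly related to the query.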

Published

2025-04-11

How to Cite

Tu, Y., Li, L., Su, L., & Huang, Q. (2025). Query-centric Audio-Visual Cognition Network for Moment Retrieval, Segmentation and Step-Captioning. Proceedings of the AAAI Conference on Artificial Intelligence, 39(7), 7464–7472. https://doi.org/10.1609/aaai.v39i7.32803

Section

AAAI Technical Track on Computer Vision VI