Dense Events Grounding in Video

Authors

  • Peijun Bao Peking University, China
  • Qian Zheng Nanyang Technological University, Singapore
  • Yadong Mu Peking University, China

DOI:

https://doi.org/10.1609/aaai.v35i2.16175

Keywords:

Language and Vision

Abstract

This paper explores a novel setting of temporal sentence grounding for the first time, dubbed dense events grounding. Given an untrimmed video and a paragraph description, dense events grounding aims to jointly localize the temporal moments of the multiple events described in the paragraph. Our main motivating fact is that multiple events to be grounded in a video are often semantically related and temporally coordinated according to the order in which they appear in the paragraph. This fact sheds light on devising more accurate visual grounding models. In this work, we propose the Dense Events Propagation Network (DepNet) for this novel task. DepNet first adaptively aggregates the temporal and semantic information of dense events into a compact set through second-order attention pooling, then selectively propagates the aggregated information back to each single event with soft attention. Based on this aggregation-and-propagation mechanism, DepNet can effectively exploit both the temporal order and the semantic relations of dense events. We conduct comprehensive experiments on the large-scale ActivityNet Captions and TACoS datasets. For fair comparison, our evaluations include both state-of-the-art single-event grounding methods and their natural extensions to the dense-events grounding setting, implemented by us. All experiments clearly show that the proposed DepNet outperforms these baselines by significant margins.
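The aggregation-and-propagation mechanism described above can be sketched in a minimal form. This is an illustrative NumPy toy, not the paper's implementation: the bilinear weights `W`, the learnable query slots, the slot count, and the residual context injection are all assumptions made here to show the two stages (second-order attention pooling over all events, then soft-attention propagation back to each event).

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def aggregate_and_propagate(events, num_slots=2, rng=None):
    """Toy aggregation-and-propagation over per-event features.

    events: (N, D) array, one feature vector per event in the paragraph.
    Returns an (N, D) array of event features enriched with
    paragraph-level context.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n, d = events.shape
    # Aggregation: a second-order (bilinear) attention pools the N events
    # into `num_slots` compact context vectors. W and `queries` stand in
    # for learned parameters (hypothetical here).
    W = rng.standard_normal((d, d)) / np.sqrt(d)
    queries = rng.standard_normal((num_slots, d))
    scores = queries @ W @ events.T              # (num_slots, N) bilinear scores
    pooled = softmax(scores, axis=-1) @ events   # (num_slots, D) compact set
    # Propagation: each single event softly attends back to the pooled set
    # and receives the aggregated information.
    attn = softmax(events @ pooled.T / np.sqrt(d), axis=-1)  # (N, num_slots)
    return events + attn @ pooled                # residual context injection

feats = np.random.default_rng(1).standard_normal((4, 8))
out = aggregate_and_propagate(feats)
print(out.shape)  # (4, 8)
```

In this sketch the pooled set is far smaller than the number of events, so cross-event information flows through a compact bottleneck, which is the intuition the abstract attributes to DepNet.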

Published

2021-05-18

How to Cite

Bao, P., Zheng, Q., & Mu, Y. (2021). Dense Events Grounding in Video. Proceedings of the AAAI Conference on Artificial Intelligence, 35(2), 920-928. https://doi.org/10.1609/aaai.v35i2.16175

Section

AAAI Technical Track on Computer Vision I