An Efficient Framework for Dense Video Captioning

Authors

  • Maitreya Suin Indian Institute of Technology Madras
  • A. N. Rajagopalan Indian Institute of Technology Madras

DOI:

https://doi.org/10.1609/aaai.v34i07.6881

Abstract

Dense video captioning is an extremely challenging task, since an accurate and faithful description of events in a video requires holistic knowledge of the video contents as well as contextual reasoning about individual events. Most existing approaches handle this problem by first proposing event boundaries from a video and then captioning a subset of the proposals. Generating dense temporal annotations and corresponding captions from long videos can be dramatically resource intensive. In this paper, we focus on the task of generating a dense description of temporally untrimmed videos and aim to significantly reduce the computational cost by processing fewer frames while maintaining accuracy. Existing video captioning methods sample frames at a predefined frequency over the entire video or use all the frames. Instead, we propose a deep reinforcement learning based approach that enables an agent to describe multiple events in a video by watching only a portion of the frames. The agent needs to watch more frames when processing an informative part of the video, and can skip frames where there is redundancy. The agent is trained using an actor-critic algorithm, in which the actor determines the frames to be watched and the critic assesses the optimality of the actor's decisions. Such efficient frame selection simplifies the event proposal task considerably and has the added effect of reducing the occurrence of unwanted proposals. The encoded state representation of the frame selection agent is further utilized to guide the event proposal and caption generation tasks. We also leverage knowledge distillation to improve accuracy. We conduct extensive evaluations on the ActivityNet Captions dataset to validate our method.
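The actor-critic frame-selection idea from the abstract can be sketched as follows. This is a minimal, illustrative toy only, not the paper's method: the class name `FrameSkipAgent`, the linear actor and critic, the feature dimension, the candidate skip lengths, and the reward signal are all hypothetical stand-ins. The actor outputs a distribution over how many frames to skip next, and the critic's value estimate serves as a baseline for the policy-gradient update.

```python
import numpy as np

# Minimal actor-critic sketch of adaptive frame skipping (illustrative only).
# The agent picks a skip length from frame features; a linear critic supplies
# a baseline. All names, features, and rewards here are hypothetical.

rng = np.random.default_rng(0)

class FrameSkipAgent:
    def __init__(self, feature_dim=8, skips=(1, 2, 4), lr=0.01):
        self.skips = skips                          # candidate skip lengths
        self.W_actor = np.zeros((feature_dim, len(skips)))
        self.w_critic = np.zeros(feature_dim)
        self.lr = lr

    def policy(self, feat):
        # Softmax over linear logits gives skip-length probabilities.
        logits = feat @ self.W_actor
        e = np.exp(logits - logits.max())
        return e / e.sum()

    def act(self, feat):
        p = self.policy(feat)
        a = rng.choice(len(self.skips), p=p)
        return a, self.skips[a]

    def update(self, feat, action, reward):
        # Critic: linear value estimate; advantage = reward - value.
        value = feat @ self.w_critic
        advantage = reward - value
        self.w_critic += self.lr * advantage * feat
        # Actor: policy-gradient step on log pi(a|s), scaled by the advantage.
        p = self.policy(feat)
        grad_logits = -p
        grad_logits[action] += 1.0
        self.W_actor += self.lr * advantage * np.outer(feat, grad_logits)

# Toy training loop: "informative" segments reward watching densely (skip 1),
# redundant segments reward skipping ahead (skip 4).
agent = FrameSkipAgent()
for _ in range(3000):
    informative = rng.random() < 0.5
    feat = rng.normal(loc=2.0 if informative else -2.0, scale=1.0, size=8)
    a, skip = agent.act(feat)
    good = (informative and skip == 1) or (not informative and skip == 4)
    agent.update(feat, a, 1.0 if good else -1.0)
```

After training on this toy signal, the policy assigns higher probability to small skips on informative-looking features and to large skips on redundant-looking ones, mirroring the watch-more / skip-more behavior the abstract describes.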

Published

2020-04-03

How to Cite

Suin, M., & Rajagopalan, A. N. (2020). An Efficient Framework for Dense Video Captioning. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 12039-12046. https://doi.org/10.1609/aaai.v34i07.6881

Section

AAAI Technical Track: Vision