Deep Reinforcement Learning for Unsupervised Video Summarization With Diversity-Representativeness Reward

Authors

  • Kaiyang Zhou Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences; Queen Mary University of London
  • Yu Qiao Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
  • Tao Xiang Queen Mary University of London

DOI:

https://doi.org/10.1609/aaai.v32i1.12255

Keywords:

Video Summarization, Reinforcement Learning

Abstract

Video summarization aims to facilitate large-scale video browsing by producing short, concise summaries that are diverse and representative of the original videos. In this paper, we formulate video summarization as a sequential decision-making process and develop a deep summarization network (DSN) to summarize videos. DSN predicts, for each video frame, a probability indicating how likely that frame is to be selected, and then takes actions based on the probability distributions to select frames, forming video summaries. To train our DSN, we propose an end-to-end, reinforcement-learning-based framework with a novel reward function that jointly accounts for the diversity and representativeness of generated summaries and relies on neither labels nor user interactions. During training, the reward function judges how diverse and representative the generated summaries are, while DSN strives to earn higher rewards by learning to produce more diverse and more representative summaries. Since labels are not required, our method can be fully unsupervised. Extensive experiments on two benchmark datasets show that our unsupervised method not only outperforms other state-of-the-art unsupervised methods but is also comparable to, or even superior to, most published supervised approaches.
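The diversity-representativeness reward described above can be sketched as follows. This is a minimal illustration, not the authors' released code: it assumes per-frame feature vectors (e.g. CNN features) and a set of frame indices chosen by the policy, scores diversity as the mean pairwise cosine dissimilarity among selected frames, and scores representativeness as an exponentiated k-medoids-style error (mean distance from every frame to its nearest selected frame). The function name and the equal weighting of the two terms are assumptions for illustration.

```python
import numpy as np

def diversity_representativeness_reward(features, selected):
    """Sketch of a diversity + representativeness reward.

    features : (T, D) array of per-frame feature vectors
    selected : list of frame indices chosen by the summarization policy
    """
    Y = features[selected]  # features of the selected frames
    n = len(selected)

    # Diversity: mean pairwise cosine dissimilarity among selected frames.
    normed = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    sim = normed @ normed.T  # pairwise cosine similarities
    if n < 2:
        r_div = 0.0
    else:
        # Average (1 - similarity) over all off-diagonal pairs.
        r_div = (1.0 - sim)[~np.eye(n, dtype=bool)].mean()

    # Representativeness: selected frames should lie close to all frames,
    # turned into a reward via exp(-mean nearest-selected distance).
    dists = np.linalg.norm(features[:, None, :] - Y[None, :, :], axis=2)
    r_rep = np.exp(-dists.min(axis=1).mean())

    return r_div + r_rep
```

With orthogonal one-hot features and all frames selected, every selected pair is maximally dissimilar (diversity 1) and every frame coincides with a selected frame (representativeness exp(0) = 1), so the reward is 2. During training, the policy gradient would push DSN toward selections that raise this combined score.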

Published

2018-04-27

How to Cite

Zhou, K., Qiao, Y., & Xiang, T. (2018). Deep Reinforcement Learning for Unsupervised Video Summarization With Diversity-Representativeness Reward. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.12255