Video Summarization via Semantic Attended Networks

Authors

  • Huawei Wei, Shanghai Jiao Tong University
  • Bingbing Ni, Shanghai Jiao Tong University
  • Yichao Yan, Shanghai Jiao Tong University
  • Huanyu Yu, Shanghai Jiao Tong University
  • Xiaokang Yang, Shanghai Jiao Tong University
  • Chen Yao, The Third Institute of Ministry of Public Security

DOI:

https://doi.org/10.1609/aaai.v32i1.11297

Abstract

The goal of video summarization is to distill a raw video into a more compact form without losing much semantic information. However, previous methods mainly consider the diversity, representativeness, and interestingness of the obtained summary, and seldom pay sufficient attention to the semantic information of the resulting frame set, especially long-range temporal semantics. To address this issue explicitly, we propose a novel technique that extracts the most semantically relevant video segments (i.e., those valid over a long temporal duration) and assembles them into an informative summary. To this end, we develop a semantic attended video summarization network (SASUM), which consists of a frame selector and a video descriptor, and selects an appropriate number of video shots by minimizing the distance between the description sentence generated for the summarized video and the human-annotated text of the original video. Extensive experiments show that our method achieves superior performance over previous methods on two benchmark datasets.
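To make the selector-descriptor coupling concrete, below is a minimal PyTorch sketch of the training signal the abstract describes: a frame selector scores frames, a video descriptor encodes the (soft-)selected frames into a sentence-level embedding, and the loss pulls that embedding toward an embedding of the human-annotated text. All module names, dimensions, the soft selection, and the embedding-distance loss are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FrameSelector(nn.Module):
    """Scores each frame feature in [0, 1]; high scores mark frames to keep."""
    def __init__(self, feat_dim=1024, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.score = nn.Linear(2 * hidden, 1)

    def forward(self, frames):                            # frames: (B, T, feat_dim)
        h, _ = self.lstm(frames)
        return torch.sigmoid(self.score(h)).squeeze(-1)   # (B, T)

class VideoDescriptor(nn.Module):
    """Maps a (soft-)selected frame set to a sentence embedding."""
    def __init__(self, feat_dim=1024, sent_dim=300):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, sent_dim, batch_first=True)

    def forward(self, frames, scores):
        # Soft selection keeps the pipeline differentiable end to end.
        weighted = frames * scores.unsqueeze(-1)
        _, h = self.encoder(weighted)
        return h.squeeze(0)                               # (B, sent_dim)

selector, descriptor = FrameSelector(), VideoDescriptor()
frames = torch.randn(2, 120, 1024)     # dummy CNN features for 120 frames
annotation_emb = torch.randn(2, 300)   # embedding of the human-annotated text (assumed given)

scores = selector(frames)
summary_emb = descriptor(frames, scores)
# Pull the summary's description toward the annotated text; the sparsity
# term stands in for keeping the number of selected shots appropriate.
loss = nn.functional.mse_loss(summary_emb, annotation_emb) + 0.01 * scores.mean()
loss.backward()
```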

Published

2018-04-25

How to Cite

Wei, H., Ni, B., Yan, Y., Yu, H., Yang, X., & Yao, C. (2018). Video Summarization via Semantic Attended Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11297