Predicting Emotions in User-Generated Videos

Authors

  • Yu-Gang Jiang, Fudan University, Shanghai
  • Baohan Xu, Fudan University, Shanghai
  • Xiangyang Xue, Fudan University, Shanghai

DOI:

https://doi.org/10.1609/aaai.v28i1.8724

Abstract

User-generated video collections have been expanding rapidly in recent years, and systems for automatic analysis of these collections are in high demand. While extensive research efforts have been devoted to recognizing semantics like "birthday party" and "skiing", few attempts have been made to understand the emotions carried by the videos, e.g., "joy" and "sadness". In this paper, we propose a comprehensive computational framework for predicting emotions in user-generated videos. We first introduce a rigorously designed dataset collected from popular video-sharing websites with manual annotations, which can serve as a valuable benchmark for future research. A large set of features is extracted from this dataset, ranging from popular low-level visual descriptors and audio features to high-level semantic attributes. Results of a comprehensive set of experiments indicate that combining multiple types of features, such as the joint use of audio and visual cues, is important, and that attribute features such as those containing sentiment-level semantics are very effective.
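The combination of modalities described above can be illustrated with a simple late-fusion scheme: train one classifier per modality on pre-extracted features and average their class-probability scores. The sketch below is a minimal, hypothetical illustration using scikit-learn SVMs and randomly generated stand-in features; the paper's actual features, fusion weights, and classifiers may differ.

```python
# Illustrative late-fusion sketch (not the paper's exact pipeline):
# one SVM per modality, fused by averaging class probabilities.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_videos, n_classes = 200, 8                       # e.g., 8 emotion categories
visual_feats = rng.normal(size=(n_videos, 128))    # stand-in for visual descriptors
audio_feats = rng.normal(size=(n_videos, 64))      # stand-in for audio features
labels = rng.integers(0, n_classes, size=n_videos) # synthetic labels, illustration only

split = 150  # simple train/test split

def modality_probs(feats):
    """Train an SVM on one modality and return test-set class probabilities."""
    clf = SVC(kernel="rbf", probability=True)
    clf.fit(feats[:split], labels[:split])
    return clf.predict_proba(feats[split:])

p_visual = modality_probs(visual_feats)
p_audio = modality_probs(audio_feats)

# Late fusion: weighted average of per-modality probabilities (weights are hypothetical).
fused = 0.5 * p_visual + 0.5 * p_audio
pred = fused.argmax(axis=1)
print(f"Fused-prediction accuracy on held-out videos: {(pred == labels[split:]).mean():.2f}")
```

With random features the accuracy is near chance; the point of the sketch is only the structure of per-modality training followed by score-level fusion.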


Published

2014-06-19

How to Cite

Jiang, Y.-G., Xu, B., & Xue, X. (2014). Predicting Emotions in User-Generated Videos. Proceedings of the AAAI Conference on Artificial Intelligence, 28(1). https://doi.org/10.1609/aaai.v28i1.8724