Capturing Dependencies among Labels and Features for Multiple Emotion Tagging of Multimedia Data

Authors

  • Shan Wu, University of Science and Technology of China
  • Shangfei Wang, University of Science and Technology of China
  • Qiang Ji, Rensselaer Polytechnic Institute

DOI:

https://doi.org/10.1609/aaai.v31i1.10629

Abstract

In this paper, we tackle the problem of emotion tagging of multimedia data by modeling the dependencies among multiple emotions in both the feature and label spaces. These dependencies carry crucial top-down and bottom-up evidence for improving multimedia affective content analysis, yet they have not been thoroughly exploited. To this end, we propose two hierarchical models that learn the shared features and the global semantic relationships among emotion labels, either independently or jointly, in order to tag multiple emotion labels of multimedia data simultaneously. Efficient learning and inference algorithms for the proposed models are also developed. Experiments on three benchmark emotion databases demonstrate the superior performance of our methods over existing methods.
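To make the core idea concrete, the sketch below shows one common way of exploiting label dependencies in multi-label tagging: a classifier chain, in which each label's classifier also sees the labels predicted before it. This is only an illustrative analogue, not the hierarchical models proposed in the paper; the emotion names, features, and weights are hypothetical.

```python
# Hedged sketch: a classifier-chain baseline illustrating how label
# dependencies can inform multi-label emotion tagging. NOT the paper's
# hierarchical model; all names and numbers below are illustrative.

from typing import List

EMOTIONS = ["joy", "sadness", "fear"]  # hypothetical label set

def predict_chain(features: List[float],
                  weights: List[List[float]],
                  biases: List[float]) -> List[int]:
    """Predict labels one at a time; each linear classifier sees the raw
    features plus the labels predicted so far, so later decisions can
    exploit dependencies among emotion labels."""
    x = list(features)
    labels = []
    for w, b in zip(weights, biases):
        score = sum(wi * xi for wi, xi in zip(w, x)) + b
        y = 1 if score > 0 else 0
        labels.append(y)
        x.append(float(y))  # feed this prediction forward along the chain
    return labels

# Toy usage: the third classifier's weight on the first predicted label
# lets "joy" influence the decision for "fear".
weights = [[1.0, 0.0],              # joy: sees 2 features
           [0.0, 1.0, -2.0],        # sadness: sees 2 features + joy
           [0.5, 0.5, 0.0, 1.0]]    # fear: sees 2 features + joy + sadness
biases = [0.0, 0.0, -0.5]
print(predict_chain([1.0, -0.5], weights, biases))  # → [1, 0, 0]
```

A real system would learn the per-label weights from data; the chain structure is what lets the model capture label-space dependencies, which is the motivation shared with the paper's approach.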

Published

2017-02-12

How to Cite

Wu, S., Wang, S., & Ji, Q. (2017). Capturing Dependencies among Labels and Features for Multiple Emotion Tagging of Multimedia Data. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.10629