It’s Not Just What You Say, But How You Say It: Multimodal Sentiment Analysis Via Crowdsourcing

Authors

  • Ahmad Elshenawy University of Washington
  • Steele Carter University of Washington
  • Daniela Braga Voicebox Technologies

DOI:

https://doi.org/10.1609/hcomp.v3i1.13264

Keywords:

Crowdsourcing, sentiment, multimodal

Abstract

This paper examines the effect of various modalities of expression on the reliability of crowdsourced sentiment polarity judgments. A novel corpus of YouTube video reviews was created, and sentiment judgments were obtained via Amazon Mechanical Turk. We created a system for isolating the text, video, and audio modalities of YouTube videos, ensuring that annotators could see only the particular modality or modalities being evaluated. Reliability of judgments was assessed using Fleiss' kappa inter-annotator agreement values. We found that the audio-only modality produced the most reliable judgments for video fragments, and that across modalities video fragments are less ambiguous than full videos.
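The abstract's reliability measure, Fleiss' kappa, extends Cohen's kappa to any fixed number of annotators per item by comparing observed pairwise agreement against the agreement expected by chance. A minimal sketch of the statistic follows; the judgment matrix in the comments is illustrative and not data from the study.

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for inter-annotator agreement.

    counts[i][j] = number of annotators who assigned category j
    (e.g. positive / negative sentiment) to item i. Every item
    must be rated by the same number of annotators.
    """
    N = len(counts)        # number of items
    n = sum(counts[0])     # annotators per item
    k = len(counts[0])     # number of categories

    # Per-item agreement: fraction of annotator pairs that agree on item i.
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]
    P_bar = sum(P_i) / N

    # Chance agreement from the marginal category proportions.
    p_j = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(p * p for p in p_j)

    return (P_bar - P_e) / (1 - P_e)
```

Perfect agreement yields a kappa of 1, values near 0 indicate chance-level agreement, and negative values indicate systematic disagreement, which is why kappa is a stricter reliability measure than raw percent agreement.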

Published

2016-03-28

How to Cite

Elshenawy, A., Carter, S., & Braga, D. (2016). It’s Not Just What You Say, But How You Say It: Multimodal Sentiment Analysis Via Crowdsourcing. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 3(1), 9–15. https://doi.org/10.1609/hcomp.v3i1.13264

Section

Crowdsourcing Breakthroughs for Language Technology Applications Workshop