Does History Help? An Experiment on How Context Affects Crowdsourcing Dialogue Annotation

Authors

  • Elnaz Nouri, University of Southern California

DOI:

https://doi.org/10.1609/hcomp.v1i1.13094

Abstract

Crowds of people can potentially solve some problems faster than individuals, and crowdsourced data can be leveraged to benefit the crowd by providing information or solutions faster than traditional means. Many tasks in dialogue system development, such as annotation, can also benefit from crowdsourcing. We investigate how to outsource dialogue data annotation through Amazon Mechanical Turk. In particular, we are interested in empirically analyzing how much context from earlier parts of the dialogue (e.g., previous dialogue turns) needs to be shown to the annotator before the target dialogue turn is presented. The answer to this question is essential for leveraging crowdsourced data for appropriate and efficient response and coordination. We study how presenting different numbers of previous turns to Turkers annotating the sentiment of dyadic negotiation dialogues affects inter-annotator reliability and agreement with the gold standard.
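The abstract does not specify which reliability metric is used or how the annotations are stored. As a minimal illustrative sketch only, the Python snippet below shows one way inter-annotator agreement could be compared across context conditions, computing mean pairwise Cohen's kappa over hypothetical Turker sentiment labels collected with different numbers of previous turns shown. The label set, the example annotations, and the choice of kappa are all assumptions, not details from the paper.

```python
from collections import Counter
from itertools import combinations

# Hypothetical sentiment label set; the paper's actual scheme is not specified.
LABELS = ["negative", "neutral", "positive"]

def cohen_kappa(a, b):
    """Pairwise Cohen's kappa between two annotators' label sequences."""
    assert len(a) == len(b)
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[label] * cb[label] for label in LABELS) / (n * n)
    return (observed - expected) / (1 - expected)

def mean_pairwise_kappa(annotations):
    """Average kappa over all annotator pairs for one context-size condition."""
    pairs = list(combinations(annotations, 2))
    return sum(cohen_kappa(a, b) for a, b in pairs) / len(pairs)

# Hypothetical labels from three Turkers on five target turns, collected
# with 0 vs. 2 previous turns of context shown (illustrative data only).
conditions = {
    0: [["positive", "neutral", "negative", "neutral", "positive"],
        ["neutral",  "neutral", "negative", "positive", "positive"],
        ["positive", "negative", "negative", "neutral", "neutral"]],
    2: [["positive", "neutral", "negative", "neutral", "positive"],
        ["positive", "neutral", "negative", "neutral", "positive"],
        ["positive", "neutral", "negative", "positive", "positive"]],
}

for context_size, annots in conditions.items():
    print(f"context={context_size} turns: "
          f"mean pairwise kappa={mean_pairwise_kappa(annots):.2f}")
```

The same per-condition comparison could be made against a gold standard by treating the gold labels as one additional "annotator"; again, this is a sketch of the general approach rather than the paper's actual evaluation procedure.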

Published

2013-11-03

How to Cite

Nouri, E. (2013). Does History Help? An Experiment on How Context Affects Crowdsourcing Dialogue Annotation. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 1(1), 6-8. https://doi.org/10.1609/hcomp.v1i1.13094

Section

Scaling Speech, Language Understanding and Dialogue through Crowdsourcing Workshop