A Human-Centered Framework for Ensuring Reliability on Crowdsourced Labeling Tasks

Authors

  • Omar Alonso Microsoft Corporation
  • Catherine Marshall Microsoft Corporation
  • Marc Najork Microsoft Corporation

DOI:

https://doi.org/10.1609/hcomp.v1i1.13097

Keywords:

crowdsourcing, label quality, experimental design, CAPTCHA

Abstract

This paper describes an approach to improving the reliability of a crowdsourced labeling task for which there is no objective right answer. Our approach focuses on three contingent elements of the labeling task: data quality, worker reliability, and task design. We describe how we developed and applied this framework to the task of labeling tweets according to their interestingness. We use in-task CAPTCHAs to identify unreliable workers, and measure inter-rater agreement to decide whether subtasks have objective or merely subjective answers.
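The abstract's rule of deciding from inter-rater agreement whether a subtask has an objective answer can be made concrete with a standard agreement statistic. The following is a minimal sketch (not the authors' code), assuming Fleiss' kappa over a fixed number of workers per item and a purely hypothetical kappa threshold for calling a subtask "objective":

```python
def fleiss_kappa(counts):
    """counts[i][j] = number of workers who assigned item i to category j.
    Assumes every item was judged by the same number of workers."""
    n_items = len(counts)
    n_raters = sum(counts[0])                 # workers per item
    n_categories = len(counts[0])
    n_judgments = n_items * n_raters

    # p_j: overall proportion of judgments falling into category j
    p = [sum(row[j] for row in counts) / n_judgments
         for j in range(n_categories)]

    # P_i: extent of agreement among workers on item i
    P = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
         for row in counts]

    P_bar = sum(P) / n_items                  # observed agreement
    P_e = sum(pj * pj for pj in p)            # agreement expected by chance
    return (P_bar - P_e) / (1 - P_e)


# Example: 4 tweets, 5 workers each, labels {interesting, not interesting}
labels = [
    [5, 0],   # unanimous
    [4, 1],
    [3, 2],
    [2, 3],   # split vote
]
kappa = fleiss_kappa(labels)
# Hypothetical decision rule: treat the subtask as having an objective
# answer only when agreement is well above chance.
print(f"kappa = {kappa:.2f}:",
      "objective" if kappa > 0.6 else "merely subjective")
```

In this toy run kappa is close to zero, so the rule would flag the subtask as subjective; the specific cutoff (0.6 here) is an illustrative assumption, not a value taken from the paper.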

Published

2013-11-03

How to Cite

Alonso, O., Marshall, C., & Najork, M. (2013). A Human-Centered Framework for Ensuring Reliability on Crowdsourced Labeling Tasks. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 1(1), 2-3. https://doi.org/10.1609/hcomp.v1i1.13097