Reducing Error in Context-Sensitive Crowdsourced Tasks


  • Daniel Haas University of California, Berkeley
  • Matthew Greenstein Locu, Inc.
  • Kainar Kamalov Locu, Inc.
  • Adam Marcus Locu, Inc.
  • Marek Olszewski Locu, Inc.
  • Marc Piette Locu, Inc.



quality control, machine learning, beyond microtasks


Most research on quality control in crowdsourced workflows has focused on microtasks, wherein quality can be improved by assigning tasks to multiple workers and interpreting the output as a function of workers' agreement. Not all work fits into microtask frameworks, however, especially work that requires significant training or time per task. In such a context-heavy crowd work system with a limited budget for task redundancy, we propose three novel techniques for reducing task error: (1) a self-policing crowd hierarchy in which trusted workers review, correct, and improve entry-level workers' output; (2) predictive modeling of task error that improves data quality through targeted redundancy; and (3) holistic modeling of worker performance that supports crowd management strategies designed to improve average crowd worker quality and allocate training to the workers who need the most assistance.
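The abstract does not specify how the predictive error model is built, but the idea of targeted redundancy can be sketched roughly: score each task with an estimated error probability and spend the limited review budget on the highest-risk tasks. The feature names, logistic weights, and function names below are hypothetical illustrations, not the authors' model.

```python
import math

def predicted_error(worker_accuracy, task_length):
    """Illustrative logistic estimate of the probability a task output
    contains an error. Weights are made up for this sketch."""
    z = 1.0 - 4.0 * worker_accuracy + 0.01 * task_length
    return 1.0 / (1.0 + math.exp(-z))

def allocate_review(tasks, budget):
    """Targeted redundancy: rank tasks by predicted error and send only
    the top `budget` tasks to trusted reviewers."""
    ranked = sorted(
        tasks,
        key=lambda t: predicted_error(t["worker_accuracy"], t["task_length"]),
        reverse=True,
    )
    return [t["id"] for t in ranked[:budget]]

tasks = [
    {"id": "a", "worker_accuracy": 0.95, "task_length": 120},
    {"id": "b", "worker_accuracy": 0.60, "task_length": 300},
    {"id": "c", "worker_accuracy": 0.85, "task_length": 200},
]
print(allocate_review(tasks, budget=1))  # → ['b'] (least accurate worker, longest task)
```

Under this scheme, review effort concentrates where the model expects errors, rather than being spread uniformly across all tasks as in standard microtask redundancy.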




How to Cite

Haas, D., Greenstein, M., Kamalov, K., Marcus, A., Olszewski, M., & Piette, M. (2013). Reducing Error in Context-Sensitive Crowdsourced Tasks. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 1(1), 28-29.