Confusing the Crowd: Task Instruction Quality on Amazon Mechanical Turk

Authors

  • Meng-Han Wu, Purdue University
  • Alexander Quinn, Purdue University

DOI:

https://doi.org/10.1609/hcomp.v5i1.13317

Keywords:

crowdsourcing, human computation, human-computer interaction

Abstract

Task instruction quality is widely presumed to affect outcomes such as accuracy, throughput, trust, and worker satisfaction. Best-practice guides written by experienced requesters offer advice on how to craft task interfaces. However, there is little evidence of how specific task design attributes affect actual outcomes. This paper presents a set of studies that expose the relationship between three sets of measures: (a) workers’ perceptions of task quality, (b) adherence to popular best practices, and (c) actual outcomes when tasks are posted (including accuracy, throughput, trust, and worker satisfaction). These were investigated using collected task interfaces, along with a model task that we systematically mutated to test the effects of specific task design guidelines.

Published

2017-09-21

How to Cite

Wu, M.-H., & Quinn, A. (2017). Confusing the Crowd: Task Instruction Quality on Amazon Mechanical Turk. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 5(1), 206-215. https://doi.org/10.1609/hcomp.v5i1.13317