Modeling Task Complexity in Crowdsourcing

Authors

  • Jie Yang Delft University of Technology
  • Judith Redi Delft University of Technology
  • Gianluca Demartini University of Sheffield
  • Alessandro Bozzon Delft University of Technology

DOI:

https://doi.org/10.1609/hcomp.v4i1.13283

Keywords:

Crowdsourcing, Complexity, Market

Abstract

Complexity is crucial to characterizing tasks performed by humans through computer systems. Yet, the theory and practice of crowdsourcing currently lack a clear understanding of task complexity, hindering the design of effective and efficient execution interfaces and of fair monetary rewards. To understand how complexity is perceived and distributed over crowdsourcing tasks, we conducted an experiment in which we asked workers to evaluate the complexity of 61 real-world, re-instantiated crowdsourcing tasks. We show that task complexity, while subjective, is coherently perceived across workers; at the same time, it is significantly influenced by task type. Next, we develop a high-dimensional regression model to assess the influence of three classes of structural features (metadata, content, and visual) on task complexity, and ultimately use them to measure it. Results show that both the appearance of a task and the language used in its description can accurately predict task complexity. Finally, we apply the same feature set to predict task performance, based on five years' worth of tasks from Amazon MTurk. Results show that features related to task complexity improve the quality of task performance prediction, demonstrating the utility of complexity as a task modeling property.

Published

2016-09-21

How to Cite

Yang, J., Redi, J., Demartini, G., & Bozzon, A. (2016). Modeling Task Complexity in Crowdsourcing. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 4(1), 249-258. https://doi.org/10.1609/hcomp.v4i1.13283