Identifying and Accounting for Task-Dependent Bias in Crowdsourcing

Authors

  • Ece Kamar, Microsoft Research
  • Ashish Kapoor, Microsoft Research
  • Eric Horvitz, Microsoft Research

DOI

https://doi.org/10.1609/hcomp.v3i1.13238

Keywords

crowdsourcing, human computation, Bayesian graphical models, bias, aggregation models

Abstract

Models for aggregating contributions by crowd workers have been shown to be challenged by the rise of task-specific biases or errors. Task-dependent errors in assessment may shift the majority opinion of even large numbers of workers to an incorrect answer. We introduce and evaluate probabilistic models that can detect and correct task-dependent bias automatically. First, we show how to build and use probabilistic graphical models that jointly model task features, workers' biases, worker contributions, and the ground-truth answers of tasks so that task-dependent bias can be corrected. Second, we show how the approach can perform a type of transfer learning among workers to address the issue of annotation sparsity. We evaluate models of varying complexity on a large data set collected from a citizen science project and show that the models are effective at correcting task-dependent worker bias. Finally, we investigate the use of active learning to guide the acquisition of expert assessments that enable the automatic detection and correction of worker bias.
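To make the core idea concrete, the following is a minimal sketch (not the paper's actual model) of label aggregation in which a worker's confusion matrix is allowed to depend on an observed task feature, here a discrete task type. It uses a simple EM loop over a Dawid–Skene-style model; the function name, arguments, and data layout are illustrative assumptions, and the paper's graphical models additionally tie biases to richer task features and share statistics across workers.

```python
# Simplified illustrative sketch: EM for crowd label aggregation where each
# worker has a separate confusion matrix per task type, so estimated bias can
# vary with an observed task feature. Not the authors' implementation.
import numpy as np

def em_task_dependent_bias(labels, task_type, n_workers, n_classes=2,
                           n_types=2, n_iters=50):
    """labels: iterable of (task_id, worker_id, observed_label);
    task_type: array giving the observed type of each task."""
    n_tasks = len(task_type)

    # Initialize label posteriors from per-task vote proportions.
    post = np.full((n_tasks, n_classes), 1.0 / n_classes)
    counts = np.zeros((n_tasks, n_classes))
    for t, w, l in labels:
        counts[t, l] += 1
    seen = counts.sum(axis=1) > 0
    post[seen] = counts[seen] / counts[seen].sum(axis=1, keepdims=True)

    for _ in range(n_iters):
        # M-step: class prior and per-(worker, task-type) confusion matrices,
        # with Laplace smoothing so unobserved cells stay nonzero.
        prior = (post.sum(axis=0) + 1.0) / (n_tasks + n_classes)
        conf = np.ones((n_workers, n_types, n_classes, n_classes))
        for t, w, l in labels:
            conf[w, task_type[t], :, l] += post[t]
        conf /= conf.sum(axis=3, keepdims=True)

        # E-step: recompute the posterior over each task's true label under
        # the current prior and type-dependent confusion matrices.
        log_post = np.tile(np.log(prior), (n_tasks, 1))
        for t, w, l in labels:
            log_post[t] += np.log(conf[w, task_type[t], :, l])
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)

    return post, conf
```

A toy usage, under the same assumptions, would pass a list of (task, worker, label) triples together with the per-task type array, e.g. `em_task_dependent_bias(labels, np.array([0, 0, 1, 1]), n_workers=3)`, and read off corrected labels from `post.argmax(axis=1)`; a worker who is reliable on one task type but systematically biased on the other would get two distinct confusion matrices in `conf`.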

Published

2015-09-23

How to Cite

Kamar, E., Kapoor, A., & Horvitz, E. (2015). Identifying and Accounting for Task-Dependent Bias in Crowdsourcing. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 3(1), 92-101. https://doi.org/10.1609/hcomp.v3i1.13238