HiveMind: Tuning Crowd Response with a Single Value

Authors

  • Preetjot Singh, Northwestern University
  • Walter Lasecki, University of Rochester
  • Paulo Barelli, University of Rochester
  • Jeffrey Bigham, Carnegie Mellon University and University of Rochester

DOI:

https://doi.org/10.1609/hcomp.v1i1.13130

Keywords:

human computation, incentives, human computer interaction, mechanism design, game theory, incentive model

Abstract

One common problem plaguing crowdsourcing tasks is tuning the set of worker responses: depending on task requirements, requesters may want a large set of rich and varied worker responses (typical of subjective evaluation tasks) or a more convergent response set (typical of objective tasks such as fact-checking). This problem is especially salient in tasks that combine workers’ responses into a single output: divergence in these settings can either add richness and complexity to the unified answer, or add noise. In this paper we present HiveMind, a system of methods that lets requesters tune the level of convergence in worker participation for different tasks simply by adjusting the value of a single variable.
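The abstract does not specify the underlying mechanism, but the core idea of a single tuning value can be illustrated with a hypothetical scoring rule that interpolates between rewarding agreement and rewarding novelty. Everything below (the function name, the parameter `alpha`, and the scoring formula) is an assumption for illustration, not the paper's actual method:

```python
from collections import Counter

def response_scores(responses, alpha):
    """Hypothetical scoring rule: alpha in [0, 1] tunes convergence.

    alpha = 1 rewards agreement (an answer's score is the share of
    workers who gave it); alpha = 0 rewards novelty (the score is the
    share who did NOT give it). This is an illustrative sketch only,
    not the mechanism from the HiveMind paper.
    """
    counts = Counter(responses)
    n = len(responses)
    return {
        answer: alpha * (c / n) + (1 - alpha) * (1 - c / n)
        for answer, c in counts.items()
    }

votes = ["A", "A", "A", "B"]
convergent = response_scores(votes, alpha=1.0)  # majority answer "A" scores highest
divergent = response_scores(votes, alpha=0.0)   # rare answer "B" scores highest
```

Under this sketch, a requester running a fact-checking task would set `alpha` near 1 to push workers toward consensus, while a subjective brainstorming task would use a low `alpha` to preserve varied responses.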

Published

2013-11-03

How to Cite

Singh, P., Lasecki, W., Barelli, P., & Bigham, J. (2013). HiveMind: Tuning Crowd Response with a Single Value. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 1(1), 66-67. https://doi.org/10.1609/hcomp.v1i1.13130