Optimal Worker Quality and Answer Estimates in Crowd-Powered Filtering and Rating

Authors

  • Akash Das Sarma, Stanford University
  • Aditya Parameswaran, University of Illinois (UIUC)
  • Jennifer Widom, Stanford University

Keywords

crowdsourcing, crowd algorithms, filtering, rating, maximum likelihood

Abstract

We consider the problem of optimally filtering (or rating) a set of items based on predicates (or scores) requiring human evaluation. Filtering and rating are ubiquitous problems across crowdsourcing applications. We consider the setting where we are given a set of items and a set of worker responses for each item: yes/no in the case of filtering and an integer value in the case of rating. We assume that each item has a true inherent value that is unknown, and that workers draw their responses from a common, but hidden, error distribution. Our goal is to simultaneously assign a ground truth to the item set and estimate the worker error distribution. Previous work in this area has focused on heuristics such as Expectation Maximization (EM), which guarantees only a local optimum, whereas we have developed a general framework that finds a maximum likelihood solution. Our approach extends to a number of variations on the filtering and rating problems.
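To make the filtering setting concrete, the following is an illustrative sketch (not the paper's algorithm, whose details are not given here): each item has a hidden true label in {0, 1}, and every worker answer is assumed to be flipped with a single common, hidden error rate. A brute-force maximum likelihood search over a discretized grid of error rates then jointly recovers the error rate and the per-item ground truth. The function name `ml_filter` and the one-parameter error model are hypothetical simplifications for illustration.

```python
import math

def ml_filter(responses, grid_steps=99):
    """Jointly estimate a common worker error rate and item ground truths
    by maximizing likelihood over a discretized grid of error rates.

    responses: list of lists of 0/1 worker answers, one inner list per item.
    Returns (best_error_rate, best_labels, best_log_likelihood).
    """
    best = (None, None, -math.inf)
    for step in range(1, grid_steps + 1):
        e = step / (grid_steps + 1)  # candidate error rate in (0, 1)
        ll, labels = 0.0, []
        for answers in responses:
            ones = sum(answers)
            zeros = len(answers) - ones
            # Log-likelihood of the observed answers under true label 1 vs 0:
            # a "yes" is correct under label 1 (prob 1 - e) and an error
            # under label 0 (prob e), and vice versa for "no".
            ll1 = ones * math.log(1 - e) + zeros * math.log(e)
            ll0 = zeros * math.log(1 - e) + ones * math.log(e)
            if ll1 >= ll0:
                labels.append(1)
                ll += ll1
            else:
                labels.append(0)
                ll += ll0
        if ll > best[2]:
            best = (e, labels, ll)
    return best

# Three items with three worker votes each: one dissenting vote overall,
# so the likelihood-maximizing error rate on the grid is 0.11 (≈ 1/9).
e, labels, ll = ml_filter([[1, 1, 0], [0, 0, 0], [1, 1, 1]])
```

The grid search makes the global-optimum property easy to see in this one-parameter model: for each candidate error rate, the best ground-truth assignment decomposes per item, so the overall maximum over the grid is found exactly rather than by iterating to a local fixed point as EM does.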

Published

2014-09-05

How to Cite

Das Sarma, A., Parameswaran, A., & Widom, J. (2014). Optimal Worker Quality and Answer Estimates in Crowd-Powered Filtering and Rating. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 2(1). Retrieved from https://ojs.aaai.org/index.php/HCOMP/article/view/13187