Quality Expectation-Variance Tradeoffs in Crowdsourcing Contests

Authors

  • Xi Gao, Harvard University
  • Yoram Bachrach, Microsoft Research
  • Peter Key, Microsoft Research
  • Thore Graepel, Microsoft Research

DOI

https://doi.org/10.1609/aaai.v26i1.8098

Keywords

crowdsourcing contest, all-pay auction, expectation-variance tradeoff, peer prediction

Abstract

We examine designs for crowdsourcing contests, in which participants compete for rewards given to superior solutions of a task. We theoretically analyze tradeoffs between the expectation and variance of the principal's utility (i.e., the quality of the best solution), and empirically test our theoretical predictions using a controlled experiment on Amazon Mechanical Turk. Our evaluation method is itself crowdsourcing-based and relies on the peer prediction mechanism. Our theoretical analysis characterizes an expectation-variance tradeoff in the principal's utility through a Pareto-efficient frontier. In particular, we show that the simple contest with 2 authors and the 2-pair contest have good theoretical properties. Our empirical results go further: among all designs tested, the 2-pair contest is superior, achieving both the highest expectation and the lowest variance of the principal's utility.
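The expectation-variance tradeoff the abstract refers to can be illustrated with a deliberately simplified toy model (this is an illustration only, not the paper's equilibrium analysis: it ignores strategic effort in the all-pay auction and simply assumes each submission's quality is an independent Uniform(0,1) draw, with the principal's utility being the best submitted quality):

```python
import random
import statistics

def simulate_best_of(n, trials=100_000, seed=0):
    """Monte Carlo estimate of the mean and variance of the best of n
    i.i.d. Uniform(0,1) solution qualities.

    Toy model only: real contests induce effort endogenously, which
    this sketch does not capture.
    """
    rng = random.Random(seed)
    best = [max(rng.random() for _ in range(n)) for _ in range(trials)]
    return statistics.mean(best), statistics.variance(best)

# A 2-author contest yields one draw of max-of-2; a design that
# collects four independent submissions (e.g. two pairs) yields
# max-of-4, raising the expected best quality and lowering its variance.
for n in (2, 4):
    mean, var = simulate_best_of(n)
    print(f"n={n}: E[best] ~ {mean:.3f}, Var[best] ~ {var:.4f}")
```

Under this assumption the closed forms are E[max] = n/(n+1) and Var[max] = n/((n+1)^2 (n+2)), so moving from 2 to 4 submissions pushes the expectation from about 0.67 to 0.80 while cutting the variance roughly in half, a (non-strategic) analogue of the frontier the paper analyzes.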

Published

2021-09-20

How to Cite

Gao, X., Bachrach, Y., Key, P., & Graepel, T. (2021). Quality Expectation-Variance Tradeoffs in Crowdsourcing Contests. Proceedings of the AAAI Conference on Artificial Intelligence, 26(1), 38-44. https://doi.org/10.1609/aaai.v26i1.8098