An Empirical Study on Short- and Long-Term Effects of Self-Correction in Crowdsourced Microtasks

Authors

  • Masaki Kobayashi, University of Tsukuba
  • Hiromi Morita, University of Tsukuba
  • Masaki Matsubara, University of Tsukuba
  • Nobuyuki Shimizu, Yahoo! Japan
  • Atsuyuki Morishima, University of Tsukuba

DOI:

https://doi.org/10.1609/hcomp.v6i1.13324

Keywords:

Crowdsourcing, Quality Control, Crowd Worker

Abstract

Self-correction for crowdsourced tasks is a two-stage setting in which a crowd worker first completes a task, then reviews the results other workers produced for the same task, and is given a chance to update his/her own result in light of that review. Self-correction was proposed as an approach complementary to statistical quality-control algorithms in which workers independently perform the same task, and it can provide higher-quality results at little additional cost. However, its effects have thus far been demonstrated only in simulations, and empirical evaluations are needed. In addition, because self-correction gives feedback to workers, an interesting question arises: is perceptual learning observed in self-correction tasks? This paper reports our experimental results on self-correction with a real-world crowdsourcing service. The empirical results show the following: (1) Self-correction is effective for making workers reconsider their judgments. (2) Self-correction is more effective when workers are shown task results produced by higher-quality workers during the second stage. (3) A perceptual learning effect is observed in some cases; self-correction can give feedback that shows workers how to provide high-quality answers in future tasks. These findings imply that we can construct a positive loop that effectively improves worker quality. We also analyze the cases in which perceptual learning can be observed with self-correction in crowdsourced microtasks.
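To make the two-stage setting concrete, the following is a minimal sketch of the workflow the abstract describes, not the authors' implementation; all names here (Answer, self_correction_task, revise) are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of two-stage self-correction:
# a worker answers a microtask, is then shown a peer's answer for the
# same task, and may keep or revise his/her own answer.

from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Answer:
    worker_id: str
    label: str


def self_correction_task(
    first_answer: Answer,
    peer_answer: Answer,
    revise: Callable[[Answer, Answer], Optional[str]],
) -> Answer:
    """Stage 2: show the peer's result; `revise` models the worker's
    decision and returns a new label, or None to keep the original."""
    new_label = revise(first_answer, peer_answer)
    if new_label is None:
        return first_answer
    return Answer(first_answer.worker_id, new_label)


# Example: a worker who switches only when the shown peer disagrees.
keep_or_switch = lambda own, peer: peer.label if peer.label != own.label else None
final = self_correction_task(Answer("w1", "cat"), Answer("w2", "dog"), keep_or_switch)
print(final)  # Answer(worker_id='w1', label='dog')
```

In this framing, the paper's second finding corresponds to choosing `peer_answer` from higher-quality workers, which makes the revision step more likely to move the final label toward the correct one.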


Published

2018-06-15

How to Cite

Kobayashi, M., Morita, H., Matsubara, M., Shimizu, N., & Morishima, A. (2018). An Empirical Study on Short- and Long-Term Effects of Self-Correction in Crowdsourced Microtasks. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 6(1), 79-87. https://doi.org/10.1609/hcomp.v6i1.13324