Performance of Paid and Volunteer Image Labeling in Citizen Science — A Retrospective Analysis

Authors

  • Kutub Gandhi, Northeastern University
  • Sofia Eleni Spatharioti, Microsoft Research
  • Scott Eustis, Healthy Gulf
  • Sara Wylie, Northeastern University
  • Seth Cooper, Northeastern University

DOI:

https://doi.org/10.1609/hcomp.v10i1.21988

Keywords:

Mechanical Turk, Volunteers, Crowdsourcing, Image Labeling, Citizen Science, Environmental Study

Abstract

Citizen science projects that rely on human computation can attempt to solicit volunteers or use paid microwork platforms such as Amazon Mechanical Turk. To better understand these approaches, this paper analyzes crowdsourced image label data from an environmental justice project examining wetland loss off the coast of Louisiana. This retrospective analysis identifies key differences between the two populations: while Mechanical Turk workers are accessible, cost-efficient, and on average rate more images than volunteers, their labels are of lower quality, whereas volunteers can achieve high accuracy with comparably few votes. Volunteer groups can also interface with an organization's educational or outreach goals in ways that the limited context of microwork prevents.

Published

2022-10-14

How to Cite

Gandhi, K., Spatharioti, S. E., Eustis, S., Wylie, S., & Cooper, S. (2022). Performance of Paid and Volunteer Image Labeling in Citizen Science — A Retrospective Analysis. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 10(1), 64-73. https://doi.org/10.1609/hcomp.v10i1.21988