Performance of Paid and Volunteer Image Labeling in Citizen Science — A Retrospective Analysis
Keywords: Mechanical Turk, Volunteers, Crowdsourcing, Image Labeling, Citizen Science, Environmental Study
Abstract
Citizen science projects that rely on human computation can either solicit volunteers or use paid microwork platforms such as Amazon Mechanical Turk. To better understand these approaches, this paper analyzes crowdsourced image label data from an environmental justice project examining wetland loss along the coast of Louisiana. This retrospective analysis identifies key differences between the two populations: while Mechanical Turk workers are accessible, cost-efficient, and rate more images than volunteers on average, their labels are of lower quality, whereas volunteers can achieve high accuracy with comparably few votes. Volunteer organizations can also interface with the educational or outreach goals of an organization in ways that the limited context of microwork prevents.
How to Cite
Gandhi, K., Spatharioti, S. E., Eustis, S., Wylie, S., & Cooper, S. (2022). Performance of Paid and Volunteer Image Labeling in Citizen Science — A Retrospective Analysis. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 10(1), 64-73. https://doi.org/10.1609/hcomp.v10i1.21988