Investigating the Influence of Data Familiarity to Improve the Design of a Crowdsourcing Image Annotation System

Authors

  • Danna Gurari, University of Texas at Austin
  • Mehrnoosh Sameki, Boston University
  • Margrit Betke, Boston University

DOI:

https://doi.org/10.1609/hcomp.v4i1.13294

Keywords:

Image Annotation, Crowdsourcing, Data Familiarity

Abstract

Crowdsourced demarcations of object boundaries in images (segmentations) are important for many vision-based applications. A commonly reported challenge is that a large percentage of crowd results are discarded due to quality concerns. We conducted three studies to examine (1) how the quality of crowdsourced segmentations differs for familiar everyday images versus unfamiliar biomedical images, (2) how making familiar images less recognizable (by rotating them upside down) influences crowd work with respect to result quality, segmentation time, and segmentation detail, and (3) how crowd workers’ judgments of the ambiguity of the segmentation task, collected by voting, differ for familiar everyday images versus unfamiliar biomedical images. We analyzed a total of 2,525 segmentations collected from 121 crowd workers and 1,850 votes from 55 crowd workers. Our results illustrate the potential benefit of explicitly accounting for human familiarity with the data when designing computer interfaces for human interaction.

Published

2016-09-21

How to Cite

Gurari, D., Sameki, M., & Betke, M. (2016). Investigating the Influence of Data Familiarity to Improve the Design of a Crowdsourcing Image Annotation System. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 4(1), 59-68. https://doi.org/10.1609/hcomp.v4i1.13294