Interactive Consensus Agreement Games for Labeling Images


  • Paul Upchurch, Cornell University
  • Daniel Sedra, Cornell University
  • Andrew Mullen, Cornell University
  • Haym Hirsh, Cornell University
  • Kavita Bala, Cornell University



Keywords: crowdsourcing, human computation


Scene understanding algorithms in computer vision are improving dramatically by training deep convolutional neural networks on millions of accurately annotated images. Collecting large-scale datasets for this kind of training is challenging, and the learning algorithms are only as good as the data they train on. Training annotations are often obtained by taking the majority label from independent crowdsourced workers using platforms such as Amazon Mechanical Turk. However, the accuracy of the resulting annotations can vary, and the hardest-to-annotate samples can have prohibitively low accuracy. Our insight is that in cases where independent worker annotations are poor, more accurate results can be obtained by having workers collaborate. This paper introduces consensus agreement games, a novel method for assigning annotations to images by the agreement of multiple consensuses of small cliques of workers. We demonstrate that this approach reduces error by 37.8% on two different datasets at a cost of $0.10 or $0.17 per annotation. The higher cost is justified because our method does not need to be run on the entire dataset. Ultimately, our method enables us to more accurately annotate images and build more challenging training datasets for learning algorithms.
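The aggregation rule described in the abstract can be sketched roughly as follows. This is a minimal illustration only, not the paper's interactive games: it models each clique's consensus as a plurality vote (an assumption; in the actual method workers reach consensus collaboratively) and accepts an annotation only when the clique consensuses agree.

```python
from collections import Counter

def plurality_label(labels):
    # Most common label among one clique's workers (stand-in for the
    # interactive consensus the paper's games produce).
    return Counter(labels).most_common(1)[0][0]

def consensus_agreement(cliques):
    """Accept an annotation only if every clique's consensus agrees.

    `cliques` is a list of label lists, one per small clique of workers.
    Returns the agreed label, or None when the consensuses disagree
    (in which case the image would need further annotation effort).
    """
    consensuses = [plurality_label(clique) for clique in cliques]
    if all(label == consensuses[0] for label in consensuses):
        return consensuses[0]
    return None

# Two cliques whose consensuses agree on "cat":
print(consensus_agreement([["cat", "cat", "dog"], ["cat", "cat", "cat"]]))
```

Contrast this with plain majority voting over a single pool of independent workers, which assigns a label even when the evidence is weak; the agreement check above instead flags hard samples for additional attention.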




How to Cite

Upchurch, P., Sedra, D., Mullen, A., Hirsh, H., & Bala, K. (2016). Interactive Consensus Agreement Games for Labeling Images. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 4(1), 239-248.