Crowdsourcing Paper Screening in Systematic Literature Reviews

Authors

  • Evgeny Krivosheev, University of Trento
  • Fabio Casati, University of Trento and Tomsk Polytechnic University
  • Valentina Caforio, University of Trento
  • Boualem Benatallah, University of New South Wales

DOI:

https://doi.org/10.1609/hcomp.v5i1.13302

Keywords:

crowdsourcing, human computation, classification, systematic reviews

Abstract

Literature reviews allow scientists to stand on the shoulders of giants: they highlight promising directions, summarize progress, and point out open challenges in research. At the same time, conducting a systematic literature review is a laborious and consequently expensive process. In the last decade, there have been several studies on crowdsourcing in literature reviews. This paper explores the feasibility of crowdsourcing for facilitating the literature review process in terms of results, time, and effort, and identifies which crowdsourcing strategies provide the best results for a given budget. In particular, we focus on the screening phase of the literature review process, and we contribute and assess strategies for running crowdsourcing tasks that are efficient in terms of budget and classification error. Finally, we present our findings based on experiments run on CrowdFlower.
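
The abstract summarizes the approach at a high level; the specific screening strategies are developed in the paper itself. As a minimal, hypothetical sketch of the basic building block such strategies refine (aggregating several noisy worker votes per paper under a fixed budget), the Python snippet below implements plain majority voting. The 'in'/'out' labels, the tie-break rule, and the three-votes-per-paper budget are illustrative assumptions, not the authors' method.

    from collections import Counter

    def majority_label(votes):
        # Return the majority decision among worker votes ('in' or 'out').
        # Ties resolve conservatively as 'in' (keep the paper for further
        # review) -- an assumption, not the paper's tie-break rule.
        counts = Counter(votes)
        return "out" if counts["out"] > counts["in"] else "in"

    def screen_papers(votes_by_paper):
        # votes_by_paper: dict mapping paper id -> list of 'in'/'out' votes.
        # Returns a dict mapping paper id -> aggregated screening decision.
        return {paper: majority_label(v) for paper, v in votes_by_paper.items()}

    # Hypothetical run: three crowd votes per paper (a fixed-budget setting).
    votes = {
        "paper-1": ["in", "out", "in"],
        "paper-2": ["out", "out", "in"],
    }
    print(screen_papers(votes))  # {'paper-1': 'in', 'paper-2': 'out'}

Spending more votes per paper lowers the expected classification error but raises cost, which is the budget-versus-error trade-off the paper's strategies address.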

Published

2017-09-21

How to Cite

Krivosheev, E., Casati, F., Caforio, V., & Benatallah, B. (2017). Crowdsourcing Paper Screening in Systematic Literature Reviews. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 5(1), 108-117. https://doi.org/10.1609/hcomp.v5i1.13302