Leveraging Crowdsourcing to Detect Improper Tasks in Crowdsourcing Marketplaces

Authors

  • Yukino Baba, The University of Tokyo
  • Hisashi Kashima, The University of Tokyo
  • Kei Kinoshita, Lancers Inc.
  • Goushi Yamaguchi, Lancers Inc.
  • Yosuke Akiyoshi, Lancers Inc.

DOI:

https://doi.org/10.1609/aaai.v27i2.18987

Abstract

Controlling the quality of tasks is a major challenge in crowdsourcing marketplaces. Most existing crowdsourcing services prohibit requesters from posting illegal or objectionable tasks. Marketplace operators must therefore monitor posted tasks continuously to find such improper tasks; however, manually investigating every task is too expensive. In this paper, we report on a trial study of automatically detecting improper tasks to support the monitoring activities of marketplace operators. We perform experiments using real task data from a commercial crowdsourcing marketplace and show that a classifier trained on operator judgments achieves high accuracy in detecting improper tasks. In addition, to reduce the operator's annotation costs and improve classification accuracy, we consider using crowdsourcing itself for task annotation. We hire a group of (non-expert) crowdsourcing workers to monitor posted tasks and incorporate their judgments into the classifier's training data. By applying quality control techniques to handle the variability in worker reliability, we show that combining non-expert judgments from crowdsourcing workers with expert judgments improves the accuracy of detecting improper crowdsourcing tasks.
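As a rough illustration of the pipeline the abstract describes, the sketch below trains a text classifier on expert (operator) judgments and folds in noisy crowd-worker judgments through a reliability-weighted vote as the quality-control step. The features (TF-IDF of task text), the logistic-regression classifier, the aggregation rule, and all example data are illustrative assumptions, not the authors' actual method or dataset.

```python
# Minimal sketch (not the authors' code): detect improper tasks from task text.
# Assumed components: TF-IDF features, logistic regression, reliability-weighted voting.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical task descriptions with expert labels (1 = improper, 0 = proper).
expert_texts = ["write fake positive reviews for our shop",
                "transcribe this 2-minute audio clip",
                "create 50 accounts and post spam links",
                "label these product photos by category"]
expert_labels = np.array([1, 0, 1, 0])

# Hypothetical crowd judgments: rows = unlabeled tasks, columns = workers.
crowd_texts = ["post this ad to 100 forums", "translate a short paragraph to English"]
crowd_votes = np.array([[1, 1, 0],
                        [0, 0, 0]], dtype=float)
worker_accuracy = np.array([0.9, 0.6, 0.7])  # assumed per-worker reliability estimates

def aggregate(votes, acc):
    """Reliability-weighted majority vote: weight each worker by the log-odds of accuracy."""
    w = np.log(acc / (1.0 - acc))
    scores = votes @ w - (1.0 - votes) @ w  # +w for an 'improper' vote, -w otherwise
    return (scores > 0).astype(int)

crowd_labels = aggregate(crowd_votes, worker_accuracy)

# Train a single classifier on the union of expert and aggregated crowd labels.
vec = TfidfVectorizer()
X = vec.fit_transform(expert_texts + crowd_texts)
y = np.concatenate([expert_labels, crowd_labels])
clf = LogisticRegression().fit(X, y)

print(clf.predict(vec.transform(["earn money by writing fake reviews"])))
```

The weighted vote stands in for the quality-control techniques mentioned in the abstract; more principled estimators of worker reliability (e.g., EM-style aggregation) could replace it without changing the rest of the pipeline.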

Published

2021-10-06

How to Cite

Baba, Y., Kashima, H., Kinoshita, K., Yamaguchi, G., & Akiyoshi, Y. (2021). Leveraging Crowdsourcing to Detect Improper Tasks in Crowdsourcing Marketplaces. Proceedings of the AAAI Conference on Artificial Intelligence, 27(2), 1487-1492. https://doi.org/10.1609/aaai.v27i2.18987