Iterative Quality Control Strategies for Expert Medical Image Labeling
Keywords: Crowdsourcing, Expert, Medical Imaging, Quality Control, Machine Learning
Abstract
Data quality is a key concern for artificial intelligence (AI) efforts that rely on crowdsourced data collection. In the domain of medicine in particular, labeled data must meet high quality standards, or the resulting AI may perpetuate biases or lead to patient harm. What are the challenges involved in expert medical labeling? How do AI practitioners address such challenges? In this study, we interviewed members of teams developing AI for medical imaging in four subdomains (ophthalmology, radiology, pathology, and dermatology) about their quality-related practices. We describe one instance of low-quality labeling being caught by automated monitoring. The more proactive strategy, however, is to partner with experts in a collaborative, iterative process prior to the start of high-volume data collection. Best practices, including 1) co-designing labeling tasks and instructional guidelines with experts, 2) piloting and revising the tasks and guidelines, and 3) onboarding workers, enable teams to identify and address issues before they proliferate.
How to Cite
Freeman, B., Hammel, N., Phene, S., Huang, A., Ackermann, R., Kanzheleva, O., Hutson, M., Taggart, C., Duong, Q., & Sayres, R. (2021). Iterative Quality Control Strategies for Expert Medical Image Labeling. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 9(1), 60-71. Retrieved from https://ojs.aaai.org/index.php/HCOMP/article/view/18940
Full Archival Papers