Quality Control Challenges in Crowdsourcing Medical Labeling

Olga Kanzheleva
Caitlin Taggart
(2019)

Abstract

Crowdsourcing has enabled the collection, aggregation, and refinement of human knowledge and judgment, i.e., ground truth, for problem domains with data of increasing complexity and scale.
Generating ground truth at this scale, especially for machine learning-based medical applications that require large volumes of consistent diagnoses, poses significant and unique quality control challenges.
Poor quality control in crowdsourced labeling of medical data can adversely affect patient health.
In this paper, we study medicine-specific quality control problems, including the diversity of grader expertise and the ambiguity of diagnosis guidelines, using novel datasets covering three eye diseases.
We present analytical findings on physicians' work patterns, evaluate existing quality control methods that rely on task completion time to circumvent the scarcity and cost of generating ground truth medical data, and share our experience with a real-world system that collects medical labels at scale.