ELSIcon2022 • Paper • June 3, 2022
Ariadne Nichol, Meghan Halley, Carole Federico, Pamela Sankar, Mildred Cho
Machine learning predictive analytics (MLPA) applications have attracted significant investment in recent years. Examples include tools marketed for predicting disease onset and healthcare costs, as well as assistive technology for making treatment recommendations in patient care. MLPA-based products are not typical medical devices; they are dynamic, continually learning and iterating. The FDA is therefore testing novel process-based regulatory approaches for these unique products. For a process-based approach to succeed, it will have to rely on developers' awareness of potential harms and their perceptions of whose responsibility it is to mitigate those harms. To better understand what developers see as key safety concerns and potential unintended consequences of their products, we interviewed 40 developers currently working on MLPA-based products for healthcare. We found that developers, in the aggregate, described potential harms to individuals, groups, and health systems. We could distinguish harms inherent to machine learning from those arising from the healthcare or industry contexts in which the applications were developed or used. Developers also described factors that could contribute to or mitigate those harms. Our findings suggest that developers could be receptive to process-based regulatory frameworks, but that substantial education and guidance would be needed.