Addressing Algorithmic Unfairness in Healthcare Artificial Intelligence
Artificial intelligence (AI) healthcare technology can address our most complex and intractable health issues. It has the potential to enhance healthcare quality, improve access, reduce cost, and deliver highly personalized care. However, delivering those solutions requires large, historical, broadly representative, and well-organized data from an affected population as well as from the individual who is seeking care. Without representative data and algorithms that render fair, accurate results, AI will exacerbate, rather than solve, longstanding health issues, especially for individuals who suffer from healthcare disparities.
Data Set Curation
The data sets that currently inform AI healthcare—upon which most new data sets are created—are frequently gathered from high-resourced, predominantly white communities, which may not be representative of treatment populations. Organizations aiming to improve AI fairness by using representative data to create and improve algorithms may use, combine, duplicate, or share individually identifiable health data collected from communities where greater data are needed, without those communities' informed consent. For…
- Panch, T., Mattie, H., & Atun, R. (2019). Artificial intelligence and algorithmic bias: Implications for health systems. Journal of Global Health, 9(2), 1–5.
- Prince, A. E. R., & Schwarcz, D. (2020). Proxy discrimination in the age of artificial intelligence and big data. Iowa Law Review, 105(3), 1257–1318.
- Hoffman, S., & Podgurski, A. (2020). Artificial intelligence and discrimination in health care. Yale Journal of Health Policy, Law, and Ethics, 19(3), 12–18.
- Takshi, S. (2021). Unexpected inequality: Disparate-impact from artificial intelligence in healthcare decisions. Journal of Law and Health, 34(2), 215–251.
- Price, W. N., II. (2019). Medical AI and contextual bias. Harvard Journal of Law & Technology, 33(1), 65–116.
- Grote, T., & Berens, P. (2020). On the ethics of algorithmic decision-making in healthcare. Journal of Medical Ethics, 46(3), 205–211.
- Tschider, C. A. (2021). AI’s legitimate interest: Towards a public benefit privacy model. Houston Journal of Health Law & Policy, 21(1), 125–184.
- The Center for Open Data Enterprise. (2019). Sharing and utilizing health data for AI applications.
- Ford, R. A., & Price, W. N., II. (2016). Privacy and accountability in black-box medicine. Michigan Telecommunications and Technology Law Review, 23(1), 1–44.
- Tschider, C. A. (2021). Beyond the “black box.” Denver Law Review, 98(3), 683–724.
- Ada Lovelace Institute. (2022). Algorithmic impact assessment: A case study in healthcare.
- Selbst, A. D., & Barocas, S. (2018). The intuitive appeal of explainable machines. Fordham Law Review, 87(3), 1085–1140.
- Linardatos, P., Papastefanopoulos, V., & Kotsiantis, S. (2021). Explainable AI: A review of machine learning interpretability models. Entropy, 23(1), 1–45.
- Price, W. N., II, & Rai, A. K. (2021). Clearing opacity through machine learning. Iowa Law Review, 106(2), 775–812.
- Tschider, C. A. (2021). Legal opacity: Artificial intelligence’s sticky wicket. Iowa Law Review, 106(3), 126–164.
About ELSIhub Collections
ELSIhub Collections are essential reading lists on fundamental or emerging topics in ELSI, curated and explained by expert Collection Editors, often paired with ELSI trainees. This series assembles materials from cross-disciplinary literatures to enable quick access to key information.