Ethical and Legal Challenges Posed by the Implementation of AI Applications in the Healthcare Setting
Healthcare institutions are implementing artificial intelligence (AI) at a rapid pace, in the hope that AI will improve the quality of care and reduce costs in the long run. However, deploying AI in healthcare settings also presents new ethical and legal challenges. For example, AI can reproduce health disparities and put patients at risk if human factors, such as implicit or explicit bias, are present in the training data set, or if the data set lacks representation from population subgroups. AI also raises challenges for regulators such as the U.S. Food and Drug Administration (FDA): how should regulators handle so-called “adaptive” AI algorithms that continuously learn, or opaque (“black-box”) algorithms? Regulators need to update the regulatory framework for AI-based medical devices as soon as possible to ensure that such devices are reasonably safe and effective when introduced on the market and remain so throughout their entire life cycle. A new regulatory framework is particularly important given that the FDA has already permitted the marketing of more than 340 AI-based medical devices, and shortcomings in the framework can compromise…
- Gerke, S., Minssen, T., & Cohen, I. G. (2020). Ethical and legal challenges of artificial intelligence-driven healthcare. In A. Bohr & K. Memarzadeh (Eds.), Artificial intelligence in healthcare (pp. 295–336). Elsevier.
- Rigby, M. J. (2019). Ethical dimensions of using artificial intelligence in health care. AMA Journal of Ethics, 21(2), 121–124.
- World Health Organization. (2021). Ethics and governance of artificial intelligence for health.
- U.S. Food & Drug Administration. (2019). Proposed regulatory framework for modifications to artificial intelligence/machine learning (AI/ML)-based software as a medical device (SaMD). Discussion paper and request for feedback.
- U.S. Food & Drug Administration. (2021). Artificial intelligence/machine learning (AI/ML)-based software as a medical device (SaMD) action plan.
- Gerke, S. (2021). Health AI for good rather than evil? The need for a new regulatory framework for AI-based medical devices. Yale Journal of Health Policy, Law, & Ethics, 20(2), 433–513.
- Cho, M. K. (2021). Rising to the challenge of bias in health care AI. Nature Medicine, 27(12), 2079–2081.
- Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453.
- Parikh, R. B., Teeple, S., & Navathe, A. S. (2019). Addressing bias in artificial intelligence in health care. JAMA, 322(24), 2377–2378.
- Dinerstein v. Google, LLC, 484 F. Supp. 3d 561 (N.D. Ill. 2020).
- Kaissis, G. A., Makowski, M. R., Rückert, D., & Braren, R. F. (2020). Secure, privacy-preserving and federated machine learning in medical imaging. Nature Machine Intelligence, 2(6), 305–311.
- Price, W. N., & Cohen, I. G. (2019). Privacy in the age of medical big data. Nature Medicine, 25(1), 37–43.
- Evans, B. J., & Pasquale, F. (forthcoming 2022). Product liability suits for FDA-regulated AI/ML software. In I. G. Cohen, T. Minssen, W. N. Price II, C. Robertson, & C. Shachar (Eds.), The future of medical device regulation. Cambridge University Press.
- Froomkin, A. M., Kerr, I., & Pineau, J. (2019). When AIs outperform doctors: Confronting the challenges of a tort-induced over-reliance on machine learning. Arizona Law Review, 61(1), 33–99.
- Price, W. N., Gerke, S., & Cohen, I. G. (2019). Potential liability for physicians using artificial intelligence. JAMA, 322(18), 1765–1766.
About ELSIhub Collections
ELSIhub Collections are essential reading lists on fundamental or emerging topics in ELSI, curated and explained by expert Collection Editors, often paired with ELSI trainees. This series assembles materials from cross-disciplinary literatures to enable quick access to key information.