Ethical, Legal, and Social Implications of Generative AI (GenAI) in Healthcare
Collection Editor(s):
Introduction
The co-evolution of computational processing power and neural network models has made revolutionary developments in generative artificial intelligence (GenAI) possible. One type of GenAI, large language models (LLMs), is disrupting a wide range of industries, including healthcare. LLMs are trained on large corpora of natural human language to predict and generate text (in chatbot form) that persuasively conveys contextual and semantic understanding. They are also being used to discover patterns in other types of data, such as genomic data, and to integrate multiple data types in ways that far surpass our previous capabilities.
Although few hypothesized uses are ready for application, GenAI suggests many potential benefits for healthcare, including clinical decision support, enhanced patient communication and engagement, streamlined clinical documentation and reduced administrative workloads for healthcare professionals, assisted candidate gene prioritization and selection, and accelerated drug…
- Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623.
- Alkuraya, I. F. (2023). Is artificial intelligence getting too much credit in medical genetics? American Journal of Medical Genetics Part C: Seminars in Medical Genetics, 193(3), Article e32062.
- Omiye, J. A., Lester, J. C., Spichak, S., Rotemberg, V., & Daneshjou, R. (2023). Large language models propagate race-based medicine. npj Digital Medicine, 6, Article 195.
- Blease, C. (2023). Open AI meets open notes: Surveillance capitalism, patient privacy and online record access. Journal of Medical Ethics, 50, 84–89.
- Allen, J. W., Earp, B. D., Koplin, J., & Wilkinson, D. (2024). Consent-GPT: Is it ethical to delegate procedural consent to conversational AI? Journal of Medical Ethics, 50, 77–83.
- Minssen, T., Vayena, E., & Cohen, I. G. (2023). The challenges for regulating medical use of ChatGPT and other large language models. JAMA, 330(4), 315–316.
- Mann, S. P., Earp, B. D., Nyholm, S., Danaher, J., Møller, N., Bowman-Smart, H., Hatherley, J., Koplin, J., Plozza, M., Rodger, D., Treit, P. V., Renard, G., McMillan, J., & Savulescu, J. (2023). Generative AI entails a credit–blame asymmetry. Nature Machine Intelligence, 5, 472–475.
- Duffourc, M., & Gerke, S. (2023). Generative AI in health care and liability risks for physicians and safety concerns for patients. JAMA, 330(4), 313–314.
- Spirling, A. (2023). Why open-source generative AI models are an ethical way forward for science. Nature, 616(7957), 413.
- Van Dis, E. A. M., Bollen, J., Zuidema, W., van Rooij, R., & Bockting, C. L. (2023). ChatGPT: Five priorities for research. Nature, 614(7947), 224–226.
- Meskó, B., & Topol, E. J. (2023). The imperative for regulatory oversight of large language models (or generative AI) in healthcare. npj Digital Medicine, 6, Article 120.
- Cohen, I. G. (2023). What should ChatGPT mean for bioethics? The American Journal of Bioethics, 23(10), 8–16.
- Rahimzadeh, V., Kostick-Quenet, K., Blumenthal-Barby, J., & McGuire, A. L. (2023). Ethics education for healthcare professionals in the era of ChatGPT and other large language models: Do we still need it? The American Journal of Bioethics, 23(10), 17–27.
- Chen, J., Cadiente, A., Kasselman, L. J., & Pilkington, B. (2023). Assessing the performance of ChatGPT in bioethics: A large language model’s moral compass in medicine. Journal of Medical Ethics, 50, 97–101.
Suggested Citation
Kostick-Quenet, K. (2024). Ethical, legal, and social implications of generative AI (GenAI) in healthcare. In ELSIhub Collections. Center for ELSI Resources and Analysis (CERA). https://doi.org/10.25936/w5ry-bq46
About ELSIhub Collections
ELSIhub Collections are essential reading lists on fundamental or emerging topics in ELSI, curated and explained by expert Collection Editors, who are often paired with ELSI trainees. This series assembles materials from cross-disciplinary literatures to enable quick access to key information.