
Ethical, Legal, and Social Implications of Generative AI (GenAI) in Healthcare


Collection Editor(s)

Kristin Kostick-Quenet, PhD
Assistant Professor, Center for Medical Ethics and Health Policy, Baylor College of Medicine
Introduction

The co-evolution of computational processing power and neural network models has made revolutionary developments in generative artificial intelligence (GenAI) possible. Large language models (LLMs), one type of GenAI, are disrupting a wide range of industries, including healthcare. LLMs are trained on large corpora of natural human language to predict and generate text (in chatbot form) that persuasively conveys contextual and semantic understanding. They are also being used to discover patterns in other types of data, such as genomic data, and to integrate multiple data types in ways that far surpass previous capabilities.

Although few hypothesized uses are ready for real-world application, GenAI offers many potential benefits for healthcare, including clinical decision support, enhanced patient communication and engagement, streamlined clinical documentation and reduced administrative workloads for healthcare professionals, assisted candidate gene prioritization and selection, and accelerated drug discovery. While these emerging tools may improve care, their ethical, legal, and social implications (ELSI) remain unclear.

At the time of writing, no GenAI systems have been reviewed by the US Food and Drug Administration (FDA), and existing regulatory frameworks are unequipped to address two novel features of GenAI: 1) its ability to adapt and improve performance over time or in response to changing conditions, and 2) its capacity for continuous learning through unsupervised, “autodidactic” self-teaching. These features make GenAI a “moving target” that may require new regulatory approaches.

The use of GenAI in healthcare settings raises numerous ethical concerns. Notably, LLMs are known to “hallucinate,” or generate outputs not grounded in fact, logic, or their training data. Accurate or not, outputs are conveyed with a tone of confidence that humans may easily mistake for fact. Integrating unverified outputs into healthcare scenarios could compromise patient safety and expose practitioners to liability. Further, the quality and reliability of responses vary with the quality of users’ prompts, an observation that has spawned a new focus on “prompt engineering” and on hybrid intelligence, a broad set of approaches that aim to optimize human-machine interactions in ways that capitalize on both human and AI capacities.

A defining feature of GenAI technology is its capacity to generate new content in the style of a select portion of the training data. While this allows models to generate novel outputs that emulate established styles (e.g., a new sonnet in the style of Shakespeare) or to tailor outputs for specific users or use cases, it can also reveal sensitive or personal information contained in training data sets. This is because model parameters encode specific features of the training data; the greater the number of parameters, the more features are inextricably embedded in the model, and the more closely outputs can mirror the training data. This capacity has drawn criticism from artists and other content creators who call for greater consent, credit, and compensation for the inclusion of their copyrighted work in GenAI training data. In healthcare settings, the potential for unintended data security breaches raises significant concerns about patient privacy and patients’ ability to consent to this use of their information.

GenAI models are also difficult to audit, in part because they are often proprietary rather than open source, and companies are reluctant to disclose which data were used to train their models. This lack of transparency has raised concerns about bias, generalizability, and fair attribution related to model outputs. Studies have shown that LLMs reproduce racial, ethnic, gender, and other biases found in human language and can produce outputs that perpetuate “race-based” medicine or other forms of discrimination in healthcare. Further, because internet data overrepresent younger users and those from Western, English-speaking countries, LLMs have been criticized for propagating dominant rather than diverse viewpoints. Potential injustices for marginalized groups are compounded by the fact that non-Western nations may be more likely to pay the price for climate change driven by the exploding energy requirements (which have risen more than 300,000-fold in just six years) of LLMs and other GenAI systems.

Some researchers are asking, “How big is too big?” and calling for greater transparency and more careful curation of training data sets for GenAI. This collection offers an introduction to the ELSI of GenAI that have been identified so far and highlights the need for participatory and anticipatory pathways to GenAI policy and regulation. It will be updated as new literature is published. You are invited to email your suggestions to [email protected].

Bias, Error and Hallucinations in GenAI & LLMs

Privacy & Consent

Accountability and Liability

GenAI, Bioethics, and Medical Ethics Education
Tags
ELSI
generative AI
large language models
bioethics
data privacy

Suggested Citation

Kostick-Quenet, K. (2024). Ethical, legal, and social implications of generative AI (GenAI) in healthcare. In ELSIhub Collections. Center for ELSI Resources and Analysis (CERA). https://doi.org/10.25936/w5ry-bq46


About ELSIhub Collections

ELSIhub Collections are essential reading lists on fundamental or emerging topics in ELSI, curated and explained by expert Collection Editors, often paired with ELSI trainees. This series assembles materials from cross-disciplinary literatures to enable quick access to key information.
