Legal Risks and Rewards of Artificial Intelligence in Health Care


Stanford Health Policy researchers address liability risk and the ethical use of AI in health care, making the case for adopting useful AI tools while keeping patient safety and patient concerns a priority.
SHP's Alyce Adams addresses the Stanford Health AI Week conference.

Stanford Health Policy’s Michelle Mello opened a talk on AI in medicine by spotlighting a growing concern among clinicians: compounding their longstanding worries about malpractice liability is uncertainty about new legal and ethical risks that the use of AI tools may bring.

Yet she cautioned that taking a head-in-the-sand approach would be unwise. Artificial intelligence isn't just a passing trend. It's becoming embedded in health care delivery, supporting diagnosis and treatment and streamlining the heavy burden of administrative tasks.

“My message for you today is that I think it's reasonable for people to have concerns about this area, but it would be a big mistake not to move forward with adoption and use of useful AI tools just because of fear of liability,” she said at the annual Stanford Health AI Week. “The law of medical negligence is all about what the reasonable person would do. And so, by adopting basic tenets of responsible use of AI, I think it is fair to say we can protect physicians fairly well from liability.”

Mello, a professor of law and of health policy, is meeting the moment herself by becoming immersed in research about the ethical use and liability risk of AI in medicine. She and colleagues at the School of Medicine were awarded a $1 million grant from the Patient-Centered Outcomes Research Institute (PCORI) and a Stanford Impact Labs grant to build a method for the ethical review of AI tools so that health care organizations, including Stanford Health Care, can implement systems that will mitigate potential ethical concerns before they cause harm.

And Mello has testified before state and federal lawmakers to emphasize the need for federal guardrails and standards for the use of AI in health care.

AI in health care and policy is a growing area of research at Stanford Health Policy, with faculty examining its benefits and pitfalls, such as these ethical and liability issues, as well as patient safety and data privacy. SHP researchers like Sherri Rose, PhD, are investigating biases behind algorithms that can lead to inaccurate results and potential harms.

Mello told the audience at one of her AI Week talks that clinicians' concerns about the liability risks of using AI in their practices are justified.

“I think it's reasonable to be concerned,” she said. “I think there are some things that are distinctive about this technology that create interesting problems for legal scholars. One is that when it comes to patient care, there is much less testing and vetting than for drugs or medical devices in general, and therefore there are much greater unknowns and potential for patient injury.”

And when these tools are used at scale, those problems can occur at scale, too.

“Injuries can replicate over many, many patients, and from a lawyer's perspective, that means big damages,” Mello said. “Another issue has to do with the humans that are using the tools, particularly for generative AI, because they work so well so much of the time. It's very easy to become complacent as users over-rely on them. And this techno-optimism can lead people to not be great governors in their personal use of the tool.”

 

Stanford Health AI Week

The weeklong event included dozens of sessions with faculty from the Stanford Institute for Human-Centered AI (HAI) and Stanford Medicine, including SHP's Rose, Alyce Adams, and Sara Singer. Mello spoke about liability and ethics at sessions on AI in Medical Education, a convening of the Coalition for Health AI (CHAI), and a symposium hosted by Stanford's Center for Artificial Intelligence in Medicine and Imaging (AIMI). Mello, Rose, and Adams spoke at an HAI policy workshop on challenges including payment models for health care AI and engaging patients in AI innovation and governance. Additionally, Rose served as the expert leading a workshop discussion on high-quality data for AI.

Visiting SHP research scholar Charlotte Haug, executive editor of NEJM AI, joined a panel of medical journal editors. While noting that AI is developing rapidly, she encouraged researchers to slow down and follow the standard protocols of scientific discovery.

Charlotte Haug

“I would like people to pay attention to the outcomes,” said Haug, a senior scientist at SINTEF Digital Health in Trondheim, Norway. “Is it clinically relevant? Is it relevant for patients and for health care systems? We discuss this a lot in our editorial meetings: transparency, reproducibility—if it’s not possible to reproduce, is it really correct to publish?”

And like many of the speakers at AI Week, Haug wishes researchers would pay more attention to the concerns and needs of patients.

“Things are changing very fast and at the same time, things haven’t changed so much for patients—and I wish the patient view would be more prominent,” said Haug.

Adams, a professor of health policy and of epidemiology and population health, is focused on the patient view. She noted that Americans today have limited trust not only in the health care system but also in the use of AI in patient care. A 2022 Pew Research Center poll, for example, found that 60% of patients are uncomfortable with the use of AI in diagnosing disease and recommending treatments.

She told HAI workshop participants that there are some 600 policies at the state and federal level to regulate the use of AI in health care, often with the goal of protecting patients.

“Yet, how often do we systematically engage patients and their advocates in important policy discussions about the use of AI and their health?” Adams asked. Her research lab is engaging directly with patients and health systems in the development of AI-enabled risk prediction tools.

“Our patient partners are helping us to understand what kind of risk information is most salient to them, identify the potential for unintended uses of the data, and see how their concerns about AI are influenced by who is using it and how it’s being used,” said Adams. “In turn, they are gaining information about what AI is and how it might already be affecting their care.”

She said understanding the patient perspective on AI can help policymakers prioritize regulatory targets, identify unintended consequences and establish an ethical framework for its use.

AI Ethics and Governance

In one of her presentations, Mello outlined the ethical assessment process at Stanford Health Care known as FURM (Fair, Useful, and Reliable Models), for which she serves as an advisor as the team develops and tests its evaluation mechanisms.

Mello is one of the co-founders of the Healthcare Ethical Assessment Lab for AI, or HEAL-AI, which informs the FURM process. Her team has been working with patient volunteers recruited from Stanford Health Care’s Patient & Family Partner Program to conduct ethical assessments of AI tools proposed for use at Stanford Health Care facilities. The 10 patients, who come from diverse backgrounds, have undergone training workshops on AI fundamentals and key ethical issues and have participated in educational events at Stanford. Mello explained that listening to patients has surfaced ethical issues, and potential solutions, that the ethicists hadn’t thought of, as well as areas where their initial assumptions about what patients would want were incorrect.

Mello and her team are building a method that can be replicated by other universities and health care systems to help ensure AI is rolled out in a way that comports with patients’ interests.

“Although superstar institutions like Stanford do a great job of vetting tools before they're implemented and a good job of monitoring once they're implemented, that's not true of most organizations,” she said. “Most organizations don't have robust vetting and governance schemes.”

We should be putting tools out there to enable lower-resourced institutions not to have to start from zero—and that is something that we're doing here.
Michelle Mello, JD, PhD
Professor of Health Policy and of Law

Those institutions risk getting caught up in the hype around AI's potential benefits without paying enough attention to potential harms. Rose recently published an invited commentary in JAMA Health Forum addressing the need for more rigorous evaluation of algorithms prior to implementation.

Sherri Rose

Good governance of AI can help health care organizations avoid poor decisions in response to all the hype around AI, Mello said, while also protecting patients and clinical staff from harm and potential lawsuits.

“This is not rocket science,” Mello said. “These are the same principles that have always protected physicians when they interact with technology. And a lot of these are also principles that can help prevent injuries to patients in the first place—which is always the best risk strategy.”

Read More


Policy Brief: The Complexities of Race Adjustment in Health Algorithms

As policymakers, health care practitioners, and technologists pursue the application of AI and machine learning (ML) algorithms in health care, this policy brief underscores the need for health equity research and highlights the limitations of employing technical “fixes” to address deep-seated health inequities.

Regulation of Health and Health Care Artificial Intelligence

In this JAMA Viewpoint, SHP's Michelle Mello discusses the paucity of formal regulations dealing with artificial intelligence in health care and what may lie ahead.

Integrating AI into Qualitative Analysis for Health Services Research

This AcademyHealth blog post by SHP's Sara Singer and colleagues explores the use of AI to enhance qualitative analysis for HSR, including challenges, questions for consideration, and assessing utility while models are still improving.