Ethical Obligations to Inform Patients About Use of AI Tools

To help health care leaders and clinicians navigate the thorny terrain of using artificial intelligence (AI) tools in testing and care, SHP's Michelle Mello and colleagues provide a framework for deciding what patients should be told about AI tools.

Surveys show that patients struggle with the idea of clinicians using AI in their care. To address this, Michelle Mello and Stanford colleagues developed a framework to guide providers on what patients should be told about AI tools.

“Permeation of artificial intelligence (AI) tools into health care tests traditional understandings of what patients should be told about their care,” writes Mello, JD, PhD, a professor of health policy and of law, in this JAMA Perspective.

“Despite the general importance of informed consent, decision support tools (e.g., automatic electrocardiogram readers, rule-based risk classifiers, and UpToDate summaries) are not usually discussed with patients even though they affect treatment decisions," the researchers write. "Should AI tools be treated similarly? The legal doctrine of informed consent requires disclosing information that is material to a reasonable patient’s decision to accept a health care service, and evidence suggests that many patients would think differently about care if they knew it was guided by AI. In recent surveys, 60% of US adults said they would be uncomfortable with their physician relying on AI, 70% to 80% had low expectations AI would improve important aspects of their care, only one-third trusted health care systems to use AI responsibly, and 63% said it was very true that they would want to be notified about use of AI in their care.

“To help health care leaders and practitioners navigate this thorny terrain, we provide a framework for deciding what patients should be told about AI tools. In contrast to prior work, we focus on tools that involve human oversight because use of autonomous AI in health care is rare.”

Read the Full JAMA Perspective

Listen to Related Podcast


Read More


Legal Risks and Rewards of Artificial Intelligence in Health Care

Stanford Health Policy researchers address issues of liability risk and the ethical use of AI in health care, making the case for tools that address liability and risk—while making patient safety and concerns a priority.

Regulation of Health and Health Care Artificial Intelligence

In this JAMA Viewpoint, SHP's Michelle Mello discusses the paucity of formal regulations dealing with artificial intelligence in health care and what may lie ahead.

Integrating AI into Qualitative Analysis for Health Services Research

This AcademyHealth blog post by SHP's Sara Singer and colleagues explores the use of AI to enhance qualitative analysis for HSR, including challenges, questions for consideration, and assessing utility while models are still improving.