AI Alone Will Not Reduce the Administrative Burden of Health Care

In this JAMA Network Viewpoint, Stanford Health Policy's Kevin Schulman and Perry Nielsen Jr. examine the impact large language models could have on our complex health-care billing system.

“Large language models (LLMs) are some of the most exciting innovations to come from artificial intelligence research. The capacity of this technology is astonishing, and there are multiple different use cases being proposed where LLMs can solve pain points for physicians—everything from assistance with patient portal messages to clinical decision support for chronic care management to compiling clinical summaries. Another often discussed opportunity is to reduce administrative costs such as billing and insurance-related costs in health care. However, before jumping into technology as a solution, considering why the billing process is so challenging in the first place may be a better approach. After all, the prerequisite for a successful LLM application is the presence of “useful patterns” in the data.”

The co-authors of this JAMA Network Viewpoint — Kevin A. Schulman, MD, a courtesy faculty member at SHP; Perry K. Nielsen Jr., a PhD student at SHP; and Kavita Patel, MD, MPH, director of policy at the Stanford Byers Center for Biodesign — go on to caution that LLMs are not a panacea for a U.S. health-care system with some 57 billion negotiated prices, or 94,335 for each service code.

“LLMs are clearly an exciting technology, but the current market environment is far from optimized to enable this technology to provide a solution for practicing physicians. In fact, adding LLMs to this milieu might exacerbate billing challenges for physicians. For example, if LLMs were used to support clinical documentation, health insurers could challenge the documentation as an LLM “hallucination.” In such a dispute, a source of the truth of what services were actually provided would no longer exist. Technology could grind the billing process to a halt.”

Read the Full JAMA Network Viewpoint

Read More

Q&As

The Safe Inclusion of Pediatric Data in AI-Driven Medical Research

AI algorithms are often trained on adult data, which can skew results when they are applied to children. A new perspective piece by SHP's Sherri Rose and several Stanford Medicine colleagues lays out an approach for safely including pediatric data.
Commentary

ChatGPT and Physicians’ Malpractice Risk

In this JAMA Forum perspective, SHP's Michelle Mello, professor of health policy and of law, and Neel Guha, a Stanford Law School student and PhD candidate in computer science, write that medical advice from AI chatbots is not yet highly accurate, so physicians should only use these systems to supplement more traditional forms of medical guidance.
News

The Future of Everything Podcast with Alyce Adams: Health Outcomes for Underserved Populations

Alyce Adams, an expert in health equity and policy, explains how new approaches in communities and health systems are improving care delivery for traditionally underserved populations.