Michelle Mello Testifies Before U.S. Senate on AI in Health Care

In her testimony before the U.S. Senate Finance Committee, Mello emphasized the need for federal guardrails and standards regarding the use of artificial intelligence in health care.
Michelle Mello testifies before the U.S. Senate Committee on Finance

Michelle Mello, JD, PhD, testified before the U.S. Senate Committee on Finance in a hearing focused on the promise and pitfalls of using artificial intelligence in health care. While Stanford has adopted stringent review processes for using AI in patient care, she noted, the federal government should establish standards for the responsible use of AI tools so that such vetting becomes widespread.

Senate Committee Chairman Ron Wyden (D-Oregon) opened the hearing by noting that the use of AI technology was no doubt making the U.S. health care system more efficient. 

“But some of these big data systems are riddled with bias that discriminate against patients based on race, gender, sexual orientation, and disability,” he said. “It’s painfully clear not enough is being done to protect patients from bias in AI.”

Wyden has introduced a bill, the Algorithmic Accountability Act, that would lay the groundwork to root out algorithmic bias from these systems. “As applied to health care, my bill would require health care systems to regularly assess whether the AI tools they develop or select are being used as intended and aren’t perpetuating harmful bias.”

Mello, a professor of health policy and of law, told the Senate committee that Stanford Health Care has developed a review process: for every AI tool proposed for deployment in Stanford hospitals, data scientists evaluate the model for bias and clinical utility, and ethicists interview patients, clinical care providers, and AI tool developers to learn what matters to them and what they’re worried about.

“We find that with just a small investment of effort, we can spot potential risks, mismatched expectations, and questionable assumptions that we and the AI designers hadn’t thought about,” she said. “I have studied patient safety, health care quality regulation, and data ethics for more than two decades. I apply that expertise in our team’s evaluations of all AI tools proposed for use in Stanford Health Care facilities, which care for over 1 million patients per year, and our recommendations about whether and how they can be used safely and effectively.”

She emphasized that the federal government should establish standards for the use of health care AI tools, yet because the field is developing rapidly, those standards must be adaptable.

“As countless historical examples of medical innovations have shown, having good intentions isn’t enough to protect against harm. The community needs guardrails and guidance,” Mello told the committee. “In light of how quickly things are moving in the field, we have to have the humility to acknowledge that we don’t know what the best standards will be two years from now. Regulation needs to be adaptable or else it will risk irrelevance—or worse, chilling innovation without producing any countervailing benefits. The wisest course now is for the federal government to foster a consensus-building process that brings experts together to create national consensus standards and processes for evaluating proposed uses of AI tools.”

To take a simple analogy, if we want to avoid motor vehicle accidents, we can’t just set design standards for cars. Road safety features, driver’s licensing requirements, and rules of the road all play important roles in keeping people safe.
Michelle Mello, JD, PhD
Professor of Health Policy and of Law


Mello highlighted three key things she has learned working in this area for Stanford Health Policy, the Stanford Institute for Human-Centered Artificial Intelligence, and Stanford Law School:

  1. While hospitals are starting to recognize the need to vet AI tools before use, most health care organizations don’t have robust review processes yet.
  2. To be effective, governance can’t focus only on the algorithm; it must also encompass how the algorithm is integrated into clinical workflow.
  3. Because the success of AI tools depends on the adopting organization’s ability to support them through vetting and monitoring, the federal government should establish standards for organizational readiness and responsibility to use health care AI tools, as well as for the tools themselves.


Mello called on the Senate committee to take into account that private health insurers are a big part of the equation for the fair use of AI.

“Just as with health care organizations, real patient harm can result when insurers use algorithms to make coverage decisions,” she said. “For instance, members of Congress have expressed concern about Medicare Advantage plans’ use of an algorithm marketed by naviHealth in prior authorization decisions for post-hospital care for older adults. In theory, human reviewers were making the final calls while merely factoring in the algorithm output; in reality, they had little discretion to overrule the algorithm. This is another illustration of why humans’ responses to model output—their incentives and constraints—merit oversight.”

Indeed, Senator Elizabeth Warren (D-MA) noted that more than 31 million senior Americans are enrolled in Medicare Advantage plans, but that the private insurance companies managing these plans “are not playing by the rules.”

“Private insurers are routinely delaying and denying care because doing so boosts their profits,” Warren said, asking Mello whether federal law requires all insurance companies to follow Medicare coverage guidelines, even if they are using AI algorithms to determine coverage.

Yes, Mello responded, these companies must follow the federal guidelines. She noted that she was pleased that the Centers for Medicare & Medicaid Services (CMS) recently adopted a Final Rule requiring that all medical decisions made by algorithms be reviewed by a medical professional.

Warren then asked Mello: “In addition to the rule that the agency has just finalized, what measures do you think CMS should take to ensure that private insurers are not leveraging AI tools to unlawfully deny care?”

“I was very heartened to see in the FAQs released this week on that Final Rule that CMS plans to beef up its audits in 2024 and specifically look at these denials,” Mello replied. “But beyond that, I think additional clarification is needed for the plans about what it means to use algorithms properly or improperly. For example, for electronic health records, it didn’t just say to make meaningful use of those records; it laid out standards for what meaningful use was.”

Read the Full Testimony

Watch the Full Hearing (Mello’s testimony begins at 48:00, and she appears again during the Q&A that follows).

Read More

Q&As

Exploring Liability Risks of Using AI Tools in Patient Care

Research led by SHP’s Michelle Mello provides some clarity regarding liability for the AI technologies that are rapidly being introduced to health care. She and her co-author analyzed more than 800 tort cases involving both AI and conventional software, in health care and non-health-care contexts, to see how decisions related to AI and liability might play out in the courts.
Commentary

President Biden’s Executive Order on Artificial Intelligence—Implications for Health Care Organizations

SHP’s Michelle Mello and Stanford Medicine colleagues write in the journal JAMA that President Biden’s recent executive order on artificial intelligence could have significant implications for health care organizations.
Q&As

The Safe Inclusion of Pediatric Data in AI-Driven Medical Research

AI algorithms often are trained on adult data, which can skew results when evaluating children. A new perspective piece by SHP's Sherri Rose and several Stanford Medicine colleagues lays out an approach for pediatric populations.