AI Can Help Doctors Change Their Minds

A recent study out of Stanford explored whether doctors would revise their medical decisions in light of new AI-generated information, finding that docs are more than willing to change their minds despite being just as vulnerable to cognitive biases as the rest of us.

Here’s the setup, published in Nature Communications Medicine:

  • 50 physicians were randomized to watch a short video of either a white male or black female patient describing their chest pain with an identical script.
  • The physicians made triage, diagnosis, and treatment decisions using any non-AI resource.
  • The physicians were then given access to GPT-4 (which they were told was an AI system that had not yet been validated) and allowed to change their decisions.

The initial scores left plenty of room for improvement.

  • The docs achieved just 47% accuracy in the white male patient group.
  • The docs achieved a slightly better 63% accuracy in the black female patient group.

The physicians were surprisingly willing to change their minds based on the AI advice.

  • Accuracy scores with AI improved from 47% to 65% in the white male group.
  • Accuracy scores with AI improved from 63% to 80% in the black female group.

Not only were the physicians open to modifying their decisions with AI input, but doing so made them more accurate without introducing or exacerbating demographic biases.

  • Both groups improved by nearly identical amounts (18 and 17 percentage points, respectively), suggesting that AI can augment physician decision-making while maintaining equitable care.
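The reported gains are simple differences in accuracy between the pre-AI and post-AI rounds. A quick sketch of that arithmetic, using the figures from the bullets above:

```python
# Accuracy figures reported in the study (percent of decisions correct)
before_ai = {"white_male": 47, "black_female": 63}
after_ai = {"white_male": 65, "black_female": 80}

# Improvement for each patient group, in percentage points
gains = {group: after_ai[group] - before_ai[group] for group in before_ai}
print(gains)  # {'white_male': 18, 'black_female': 17}
```

The near-identical gains in both groups are what support the claim that the AI assist didn’t introduce or widen a demographic gap.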
  • It’s worth noting that the docs used AI as more than a search engine, asking it to bring in new evidence, compare treatments, and even challenge their own beliefs [Table].

The Takeaway

Although having the doctors go first means that AI didn’t save them any time in this study – and actually increased time per patient – it showed that flipping the paradigm from “doctors checking AI’s work” to “AI helping doctors check their own work” has the potential to improve clinical decisions without amplifying biases.

The Volume and Cost of Quality Metric Reporting

A Johns Hopkins-led study in JAMA reached a conclusion that many health systems are already all-too-familiar with: reporting on quality metrics is a costly endeavor. 

The time- and activity-based costing study estimated that Johns Hopkins Hospital spent over $5M on quality reporting activities in 2018 alone, independent of any quality-improvement efforts.

Researchers identified a total of 162 unique metrics, which fell into overlapping categories:

  • 96 were claims-based (59%) 
  • 107 were outcome metrics (66%) 
  • 101 were related to patient safety (62%) 

Preparing and reporting data for these metrics required over 100,000 staff hours, with an estimated personnel cost of $5,038,218 plus an additional $602,730 in vendor costs.

  • Claims-based metrics ($38k per metric per year) required the most resources despite drawing on “collected anyway” administrative data – the researchers attribute this to the challenge of validating ICD codes and confirming whether comorbidities were present on admission.

Although the $5M cost of quality reporting is a small fraction of Johns Hopkins Hospital’s $2.4B in annual expenses, extrapolating those findings to 4,100 acute care hospitals in the US suggests that we’re currently spending billions on quality reporting every year.
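The totals and the national extrapolation can be reproduced from the reported figures. A minimal sketch, with the big assumption labeled in the comments (treating every US acute care hospital as comparable to Johns Hopkins, which the study itself flags as a rough extrapolation):

```python
# Reported 2018 quality-reporting costs at Johns Hopkins Hospital
personnel_cost = 5_038_218   # staff time across ~100,000 hours
vendor_cost = 602_730        # external vendor fees
total_cost = personnel_cost + vendor_cost  # ~$5.6M

num_metrics = 162
avg_cost_per_metric = total_cost / num_metrics  # ~$35k per metric per year

# Rough national estimate -- ASSUMPTION: all 4,100 US acute care
# hospitals incur costs similar to Johns Hopkins Hospital's
national_estimate = total_cost * 4_100  # ~$23B per year
```

Even if smaller hospitals spend a fraction of what an academic medical center does, the back-of-the-envelope total still lands in the billions, which is the study’s core point.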

That conclusion raises questions that are outside the scope of this study but extremely important for understanding the true value of quality reporting.

  • Do the benefits of quality reporting outweigh the burden it places on clinicians?
  • Would the time and effort required for quality reporting be better spent on patient care?
  • Do quality metrics accurately reflect a hospital’s overall quality of care?

The Takeaway

Non-clinical administrative costs are a giant slice of the healthcare spending pie, and quality measurement adds to them through growing spending on chart review and coding optimization. Quantifying the burden of quality reporting is a key step toward understanding its overall cost-effectiveness, and although this study doesn’t settle that question, it lays the foundation for the work that will.

-- The Digital Health Wire team