Doctors Who Use AI Are Viewed Worse by Peers

The research headline of the week belongs to a study out of Johns Hopkins University that found “doctors who use AI are viewed negatively by their peers.”

Clickbait from afar, but far from clickbait. The investigation in npj Digital Medicine surfaced interesting takeaways after randomizing 276 practicing clinicians to evaluate one of three vignettes depicting a physician: using no GenAI (the control), using GenAI as a primary decision-making tool, or using GenAI as a verification tool.

  • Participants rated the clinical skill of the physician using GenAI as a primary decision-making tool significantly lower than that of the physician who didn’t use it (3.79 vs. 5.93 for the control on a 7-point scale).
  • Framing GenAI as a “second opinion” or verification tool softened the negative perception of clinical skill, but didn’t fully eliminate it (4.99 vs. 5.93 control).
  • Ironically, while an overreliance on GenAI was viewed as a weakness, the clinicians also recognized AI as beneficial for enhancing medical decision-making. Riddle us that.

Patients seem to agree. A separate study in JAMA Network Open took a look at the patient perspective by randomizing 1.3k adults into four groups shown fake ads for family doctors that differed in one key way: no mention of AI use (the control), or a reference to the doctor using AI for administrative, diagnostic, or therapeutic purposes (Supplement 1 has all the ads).

For every AI use case, the doctors were rated significantly worse on a 5-point scale (deltas from the control are sketched below):

  • less competent – control: 3.85; admin AI: 3.71; diagnostic AI: 3.66; therapeutic AI: 3.58
  • less trustworthy – control: 3.88; admin AI: 3.66; diagnostic AI: 3.62; therapeutic AI: 3.61
  • less empathic – control: 4.00; admin AI: 3.80; diagnostic AI: 3.82; therapeutic AI: 3.72
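
For the numerically inclined, here’s a minimal sketch (in Python, using only the means reported above) that computes each AI condition’s drop relative to the control. The pattern worth noticing: therapeutic AI takes the largest hit on all three traits.

```python
# Mean ratings from the JAMA Network Open study (5-point scale).
control = {"competent": 3.85, "trustworthy": 3.88, "empathic": 4.00}
conditions = {
    "admin AI":       {"competent": 3.71, "trustworthy": 3.66, "empathic": 3.80},
    "diagnostic AI":  {"competent": 3.66, "trustworthy": 3.62, "empathic": 3.82},
    "therapeutic AI": {"competent": 3.58, "trustworthy": 3.61, "empathic": 3.72},
}

# Drop vs. control for each trait, per condition.
for name, ratings in conditions.items():
    drops = {trait: round(control[trait] - score, 2) for trait, score in ratings.items()}
    print(name, drops)

# admin AI {'competent': 0.14, 'trustworthy': 0.22, 'empathic': 0.2}
# diagnostic AI {'competent': 0.19, 'trustworthy': 0.26, 'empathic': 0.18}
# therapeutic AI {'competent': 0.27, 'trustworthy': 0.27, 'empathic': 0.28}
```

The drops are modest in absolute terms, but they all point the same direction.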

Where’s that leave us? Despite pressure on clinicians to be early AI adopters, using AI clearly invites skepticism from both peers and patients. In other words, AI adoption is getting throttled not only by technological barriers, but also by some less-discussed social ones.

The Takeaway

Medical AI moves at the speed of trust, and these studies highlight the social stigmas that still need to be overcome for patient care to improve as fast as the underlying tech.

OpenEvidence Partners With JAMA Ahead of Next Raise

“The fastest-growing platform for doctors in history” continues to step on the gas, and OpenEvidence is reportedly on the verge of notching a $3B valuation after inking a deal to bring JAMA Network journals to its AI medical search engine.

The multi-year content agreement will make full-text articles from the American Medical Association’s JAMA, JAMA Network Open, and 11 specialty journals available directly within the OpenEvidence platform.

  • OpenEvidence’s medical search engine helps clinicians make decisions at the point of care, turning natural language queries into structured answers with detailed citations.
  • The model was purpose-built for healthcare using training data from strategic partners like the New England Journal of Medicine, which joined the platform through a similar deal earlier this year.

The Disney+ content strategy has arrived in healthcare. OpenEvidence compares its approach to streaming services that drive subscriptions through exclusive movies.

  • If a physician wants information from top journals to support decision making, they’ll either have to get it straight from the source or use OpenEvidence, just as anyone who wants to stream Moana needs to go to Disney+.
  • The kicker is that OpenEvidence is available at no cost to verified physicians, and advertising generates all of the revenue. 

The blueprint is working like a charm. OpenEvidence has over 350k doctors using its platform plus another 50k joining each month, and it’s apparently close to raising $100M at a $3B valuation just a few months after closing its $75M Series A.

  • It’s rare to find hockey stick growth in digital health, and OpenEvidence is a good reminder that many areas of healthcare change slowly… then all at once.
  • It also isn’t too surprising to hear that VCs like Google Ventures and Kleiner Perkins are lining up to fund a company with an ad-supported business model similar to Doximity’s – one of the only successful healthcare IPOs since the start of the pandemic.

The Takeaway

Content is king, and OpenEvidence is locking in partnerships to make sure its platform is wearing the crown. The results speak for themselves, but healthcare’s GenAI streaming wars are just getting started.

The Volume and Cost of Quality Metric Reporting

A Johns Hopkins-led study in JAMA reached a conclusion that many health systems are already all-too-familiar with: reporting on quality metrics is a costly endeavor. 

The time- and activity-based costing study estimated that Johns Hopkins Hospital spent over $5M on quality reporting activities in 2018 alone, independent of any quality-improvement efforts.

Researchers identified a total of 162 unique metrics (the categories below overlap, which is why the percentages sum to more than 100%):

  • 96 were claims-based (59%) 
  • 107 were outcome metrics (66%) 
  • 101 were related to patient safety (62%) 

Preparing and reporting data for these metrics required over 100,000 staff hours, with an estimated personnel cost of $5,038,218 plus an additional $602,730 in vendor costs.

  • Claims-based metrics ($38k per metric per year) required the most resources despite being generated from “collected anyway” administrative data, which the researchers believe is likely tied to the challenge of validating ICD codes and confirming whether comorbidities were present on admission.

Although the $5M cost of quality reporting is a small fraction of Johns Hopkins Hospital’s $2.4B in annual expenses, extrapolating those findings across the roughly 4,100 acute care hospitals in the US suggests that we’re currently spending billions on quality reporting every year.
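
Here’s a back-of-envelope version of that extrapolation, using only the figures above. The big assumption is ours, not the study’s: it treats every acute care hospital as carrying a Johns Hopkins-sized burden, which almost certainly overstates the total given how much larger JHH is than the average facility.

```python
# Back-of-envelope extrapolation of the study's 2018 figures.
# Assumption (ours, not the study's): every US acute care hospital
# carries a reporting burden comparable to Johns Hopkins Hospital's.

personnel_cost = 5_038_218   # personnel cost of quality reporting (2018)
vendor_cost = 602_730        # additional vendor costs
staff_hours = 100_000        # "over 100,000" staff hours, taken at face value
num_hospitals = 4_100        # approximate count of US acute care hospitals

per_hospital = personnel_cost + vendor_cost       # ≈ $5.64M
implied_rate = personnel_cost / staff_hours       # ≈ $50/hour in personnel cost
national_estimate = per_hospital * num_hospitals  # ≈ $23.1B

print(f"Per-hospital burden: ${per_hospital:,}")
print(f"Implied personnel rate: ${implied_rate:.0f}/hour")
print(f"Naive national estimate: ${national_estimate / 1e9:.1f}B per year")
```

Even if the average hospital spends only a fraction of what JHH does, the national total comfortably lands in the billions, which is all the study’s conclusion requires.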

That conclusion raises questions that are outside the scope of this study but extremely important for understanding the true value of quality reporting.

  • Do the benefits of quality reporting outweigh the burden it places on clinicians?
  • Would the time and effort required for quality reporting be better spent on patient care?
  • Do quality metrics accurately reflect a hospital’s overall quality of care?

The Takeaway

Non-clinical administrative costs are a giant slice of the healthcare spending pie, and quality measurement unintentionally adds to them by driving spending on chart review and coding optimization. Quantifying the burden of quality reporting is a key step toward understanding its overall cost-effectiveness, and although this study doesn’t tackle that question directly, it lays the foundation for those that will.
