Human Factors of AI, One Medical, and Too Much Trial Data November 3, 2025
“The first role of AI is to get physicians back to the state where they make medical decisions.”
Nabla CEO Alex LeBrun
The majority of AI research centers on model performance, but a new paper in JAMIA poses five questions to guide the discussion around how physicians actually interact with AI during diagnosis.
A little reframing goes a long way. As the clinical scope of AI expands alongside its capabilities, the interface between the models and doctors is becoming increasingly important.
- Researchers from UCLA and Tufts University point out that this “human-computer interface” is essential to make sure AI is properly integrated into care delivery, serving as the first line of defense against common AI pitfalls like distracting doctors or giving them too much confidence in its answers.
Here are the questions they came up with, and why each one matters:
Question 1: What type of information and format should AI present?
- Why it’s important: Deciding how information gets presented is just as important as deciding what information to present. Format affects doctors’ attention, diagnostic accuracy, and possible interpretive biases.
Question 2: Should AI provide that information immediately, after initial review, or be toggled on and off by the physician?
- Why it’s important: Immediate information can lead to a biased interpretation, while delayed cues can help physicians maintain their hard-earned diagnostic skills by allowing them to fully engage in each diagnosis.
Question 3: How does AI show its reasoning?
- Why it’s important: Clear explanations of how AI arrives at a decision can highlight features that were ruled in or out, provide “what if” explanations, and more effectively align with doctors’ clinical reasoning.
Question 4: How does AI affect bias and complacency?
- Why it’s important: When physicians lean too heavily on AI, they may engage less of their own critical thinking, allowing accurate diagnoses to slip past them.
Question 5: What are the risks of long-term reliance on AI?
- Why it’s important: Long-term AI reliance could end up eroding learned diagnostic abilities. We recently covered a great study in The Lancet that investigated the topic.
The Takeaway
AI holds enormous potential to improve clinical decision-making, but poor integration could end up doing more harm than good. This paper provides a solid framework to push the field from “Can AI detect disease?” to “How should AI help doctors detect disease without introducing new risks?”
Join Nabla at Ending Clinician Burnout Summit
Nabla is partnering with the Ending Clinician Burnout Global Summit 2025 (Nov 6-7, virtual), a global community working together to reimagine healthcare and build lasting solutions to the clinician burnout crisis. This two-day event will bring together leading institutions like Mayo Clinic, Duke Health, and Johns Hopkins to design systemic changes that protect clinicians and strengthen patient care. Learn more and register here.
10 Strategies to Expedite Provider Credentialing
Lagging provider credentialing workflows can create a wave of unexpected care delays and financial setbacks. Sidestep these pitfalls by tuning into Medallion’s recording of 10 Key Strategies to Expedite Provider Credentialing and see how your organization can keep its team well-oiled and patient care intact amid healthcare’s ever-evolving changes.
- One Medical + Cleveland Clinic: Amazon One Medical and Cleveland Clinic opened their first collaborative primary care office in Northeast Ohio. The new location offers preventive screenings and chronic disease management (for conditions like diabetes and hypertension), as well as same-day care for common needs. The duo plans to open more offices throughout 2026 as they look to blend One Medical’s hybrid primary care model with Cleveland Clinic’s network of hospitals and specialists.
- Leveraging Clinician Expertise With AI Agents: MIT Technology Review put out a great interview with Nabla CEO Alex LeBrun exploring how ambient assistants are finally “letting doctors be doctors” and why augmenting them with agentic AI is poised to have a huge impact. Healthcare is a complex web of dependencies between different workflows and processes, and LeBrun points out that “everybody would like to get rid of these things, but it’s not possible because you would need to change everything at once.” Agentic AI offers a long-awaited solution to “improve the process without getting rid of the legacy infrastructure.”
- Goodbye Independent Providers: The latest NBER data shows what many studies have shown before: hospital ownership of physician practices is skyrocketing. From 2008 to 2022, the share of physicians whose practices are owned by a hospital or health system nearly doubled, from 27.5% to 52.1%. The growth was seen across every specialty, especially surgical specialties. Cardiologists have been the most impacted (hospital ownership up 38.4%), followed by general surgeons (28.1%), while primary care physicians have seen a less pronounced increase of “only” 18.1%.
- OutcomesAI Seed Round: OutcomesAI closed $10M in seed funding to scale AI-enabled nursing care using its Glia platform. Glia combines AI voice agents that manage patient calls – both inbound and outbound – with licensed nurses for services like triage, virtual care, and post-acute follow-ups. The voice agents handle everything from scheduling visits to collecting symptoms, while the nurses leverage AI support such as real-time scribing and protocol guidance to expand clinical capacity.
- Heidi + MaineGeneral: Heidi published a solid breakdown of its partnership with MaineGeneral Health, which saw 98% of clinicians adopt the ambient AI assistant in the first phase implementation. MGH selected Heidi because of its specialty depth and clinician-first design, which we’ll also say was impressive to see first-hand at HLTH. Heidi takes the unique approach of “focusing on experience and workflow over EHR integration,” and investors recently backed the strategy with $65M of Series B funding.
- Too Much of a Good Thing? In the age of AI, data is king, but new research out of Tufts University suggests that collecting too much data can hold back clinical trials. Researchers found that nearly one-third of the information collected across 105 trials was either “non-core” or “non-essential,” and more than half of that extra data came from patient surveys. The authors conclude that clinical teams should think through protocol design to avoid costly mistakes with “downstream site feasibility and site burden.”
- Atropos Evidence Agent: Atropos Health launched its Atropos Evidence Agent to embed real-world evidence generation directly into the EHR. Stanford Health Care is already leveraging the agent to generate the RWE needed to provide documentation for treatment options or other care decisions – without the physician ever needing to ask a question. Atropos plans to take the collaboration a step further by piloting direct integration into Microsoft’s Dragon Copilot ambient workflow.
- LLMs Have Great Bedside Manner: LLMs are officially more empathetic than healthcare professionals, or at least better at pretending to be. A meta-analysis of 13 studies comparing clinicians with GenAI showed that the humans were viewed as less empathetic in all but one of them. This research will probably get referenced quite a bit, but time will tell if generative empathy is actually better than the real thing.
- Mental Health to Dominate DHAC Discussion: The next meeting of the FDA’s Digital Health Advisory Committee on November 6 will be dominated by a discussion of digital mental health technologies. The FDA’s agenda for the DHAC meeting includes presentations on generative AI and digital mental health devices, regulatory considerations, and other issues. DHAC’s first meeting a year ago concentrated on generative AI regulation, especially post-market monitoring.
- More Good News for GLP-1s: A sub-analysis of Novo Nordisk’s SELECT trial suggests semaglutide – AKA Ozempic – has cardiovascular benefits beyond simple body fat reduction. The research included 17.6k overweight/obese patients with cardiovascular disease and without diabetes, revealing that semaglutide reduced major adverse cardiovascular events consistently across all baseline weights and waist circumferences. While greater waist reduction correlated with lower MACE risk (accounting for ~33% of the benefit), the cardioprotective effects were largely independent of baseline body fat percentage and weight loss.
Measuring Real ROI From Ambient AI
In a groundbreaking report, Abridge shares a new methodology for measuring the ROI impact of ambient AI at a more granular level than ever before. Abridge partners are now getting a precise look at changes in wRVUs, HCC capture, time spent in notes, and much more. Four health systems also share their results across many of these metrics. To learn more about the technology behind this breakthrough and to see some of the data, download the report here.
Are You Invisible on AI Search?
Your next patient is asking ChatGPT. But unless your healthcare company has content that answers their questions, you won’t show up. Tely AI fixes this by analyzing your niche, identifying what patients and partners look for, and publishing expert-level content to make you visible on Google AI, ChatGPT, and Perplexity. Launch your AI agent in 5 minutes and get your first article on us.
- Scale Remote Patient Monitoring With BPM Pro 2: BPM Pro 2 is the next generation of cellular blood pressure monitors, empowering care teams to scale remote patient monitoring and streamline operations. Discover why leading providers are choosing BPM Pro 2 to collect highly precise measurements and enrich them with Patient Insights from patients’ daily lives.
- Under the Hood of Navina’s AI: Navina’s AI engine harnesses over 600 proprietary algorithms to transform fragmented patient data into actionable clinical intelligence at the point of care. It’s shaped with the expertise of physicians to turn multiple data sources (EHR, HIE, claims, care gap files, etc.) into contextualized insights like suspected conditions or evidence for care gap closures – each linked back to the original source. Download the whitepaper to see examples of Navina’s AI in action.