Wheel Reveals How Patients Actually Engage With Virtual Care

Wheel just published its third annual Virtual Care Horizons report, and it's overflowing with great data on where telehealth is scaling – and which models are built to last.

Virtual care is all grown up. Adoption has stabilized, competition has intensified, and success is no longer driven by first visits. It’s driven by continuity.

  • That means that scale alone is no longer the headline. Data from 1.4M visits on Wheel’s Horizon telehealth platform shows that sustained engagement is the new unit of growth.

New year, new trends. Wheel answered a trio of important questions using real behavioral data from its users rather than the usual survey approach.

  • How do patients actually move from entry points to ongoing care?
  • Where do they stack services?
  • Which models support durable engagement over time?

Women’s health is the new growth engine. In 2025, women’s health became the largest service category on the platform, accounting for about 50% of all visits.

  • One in three new customer launches or expansions focused on this vertical – a strong signal of long-term category strength rather than a one-off volume spike.
  • Women’s health offers a uniquely durable foundation for virtual care. It creates demand for a wide range of high-frequency touchpoints early in life (sexual and reproductive health), then naturally extends into more complex, longitudinal needs (menopause support).

Weight management is the clinical anchor. It’s the connective tissue between hormonal, metabolic, and chronic care, effectively converting single-point entry into integrated care models.

  • Patients entering through this vertical are more likely to stack services ranging from follow-up care to diagnostics. As GLP-1s went mainstream last year, Wheel saw a 263% surge in demand for downstream chronic condition and preventive care visits.

Virtual care is the operating layer. It’s evolved from an access channel to the infrastructure for multi-condition management, and the retention metrics look a lot different than they did in the days of transactional telehealth.

  • 69% of Wheel-powered visits were created by returning patients in 2025. 
  • 70% of patients engaged in chronic and preventive care programs return for 2+ visits.
  • 58% of women’s health patients return for a second visit or beyond.

The Takeaway

Most vendor reports read like a billboard for their services, but Wheel just gave us one of the best recent snapshots of how patients are actually engaging with virtual care.

Bessemer Venture Partners State of Health AI

Bessemer Venture Partners’ always-stellar State of Health AI report did a great job explaining why we (probably) aren’t in a bubble even though the health AI rocket has hit escape velocity.

AI is more than hype. BVP points to signals from the private markets to make its case. 

M&A activity is surging. Global health tech M&A reached 400 deals in 2025 (up from 350 in 2024), but the strategic rationale matters more than the volume. Healthcare orgs and investors recognize that AI simultaneously drives revenue growth and margin improvement.

  • Prime example: the Smarter Technologies roll-up was designed to leverage Thoughtful and SmarterDx’s growth engine and clinical AI platform to drive margin expansion across the Access Healthcare RCM services conglomerate.

VC funding is nearly back to pandemic levels. BVP counted 527 venture deals in 2025 (~$14B total), with the average round size climbing 42% to $29M.

  • AI startups captured 55% of that, up from 37% in 2024. Even more importantly, for every $1 invested in AI companies overall, $0.22 was deployed to healthcare AI startups – outpacing healthcare’s “fair share” of roughly 18% of U.S. GDP.

The question now is, are we in a bubble? BVP has a nuanced answer for why health AI is in a better spot than dot-com-era startups were.

  • First, AI’s technological shift has spurred the invention of new business models, with the emergence of “AI-services-as-software” companies delivering service-level outcomes (human-quality work) with software-level margins (70%+ gross margins).
  • Second, buyers are now pulling instead of being pushed. While EHRs took 15 years to scale, AI scribes have pulled it off in three. Demonstrable ROI and ease of implementation were key here.

Health AI has an X Factor. New health AI “supernova” startups are bending traditional growth curves entirely. BVP attributes these supernovas’ unprecedented growth to four X Factors.

  • Continuous hyper-growth velocity (not just growth projections)
  • Revenue durability through defensibility
  • Productivity gains that translate to better margins and leaner full-time employee (FTE) metrics at scale
  • Point solution to platform expansion

Maybe sane valuations, maybe VC mental gymnastics. BVP argues that a supernova with $30M ARR and $1B valuation isn’t overvalued; it has fundamentally different growth dynamics.

  • When you’re growing 6x instead of 2x, you reach $100M ARR in 18 months instead of 36+ months. That compression in time-to-scale commands a premium, and BVP says a 7x revenue multiple for supernovas is justified versus 2-3x for a strong SaaS company.
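The time-to-scale math above follows from compounding at a constant annual growth multiple. A minimal sketch, assuming a hypothetical $10M starting ARR (BVP's exact assumptions aren't stated in the report):

```python
import math

def months_to_target(start_arr_m: float, target_arr_m: float, annual_multiple: float) -> float:
    """Months to grow from a starting ARR to a target ARR at a constant annual growth multiple."""
    years = math.log(target_arr_m / start_arr_m) / math.log(annual_multiple)
    return 12 * years

# Illustrative: a startup at $10M ARR targeting $100M ARR.
fast = months_to_target(10, 100, 6)  # ~15 months at 6x/year
slow = months_to_target(10, 100, 2)  # ~40 months at 2x/year
print(round(fast), round(slow))
```

Under these illustrative inputs, the 6x grower compresses time-to-$100M by roughly 2.5x – the compression BVP argues justifies the premium multiple.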

The Takeaway

Health AI is going supernova, and the explosion might actually be big enough to let the leaders grow into their astronomical valuations.

AI Spots Early Cognitive Decline in Clinical Notes

Early disease detection is entering the AI era, and a new study in npj Digital Medicine shows that autonomous agents can now flag cognitive decline using nothing but clinical notes.

Cognitive decline is difficult to detect. It remains significantly underdiagnosed in routine care, and traditional screening usually requires a dedicated clinician and tests that can take hours. 

  • At the same time, early detection is becoming increasingly important, especially with the recent approval of Alzheimer’s therapies that are most effective when administered early. 

Mass General Brigham might have an answer. Clinical notes contain whispers of cognitive decline that busy clinicians can’t always hear. MGB built a system that listens at scale.

  • These whispers include everything from linguistic shifts and sentence pauses to disorganized narratives and family member concerns. 
  • MGB developed an AI system that scans for these signals in routine clinical documentation, leveraging five specialized agents that critique each other and refine their reasoning.
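The critique-and-refine pattern described above can be sketched in miniature. This toy uses keyword heuristics in place of LLM agents, and the agent roles and signal phrases are hypothetical – the real MGB system reasons over free text:

```python
# Toy sketch: several "agents" score a note for signals of cognitive
# decline, then revise their scores toward the group consensus, mimicking
# the mutual-critique step. All roles and keywords are illustrative.

NOTE = "Patient repeats questions; daughter reports new confusion managing meds."

AGENTS = {
    "linguistic": ["repeats", "word-finding", "tangential"],
    "functional": ["managing meds", "missed appointments", "driving"],
    "collateral": ["daughter reports", "family concerned", "spouse notes"],
}

def initial_pass(note: str) -> dict[str, float]:
    # Each agent flags the note if any of its signal phrases appear.
    return {role: float(any(k in note.lower() for k in kws))
            for role, kws in AGENTS.items()}

def critique_round(scores: dict[str, float]) -> dict[str, float]:
    # Agents shift toward the consensus after seeing the group's scores.
    consensus = sum(scores.values()) / len(scores)
    return {role: (s + consensus) / 2 for role, s in scores.items()}

scores = critique_round(initial_pass(NOTE))
flagged = sum(scores.values()) / len(scores) >= 0.5
print(flagged)
```

The design choice worth noting is the second pass: no single agent's heuristic decides the outcome, which is what lets the real system catch signals an individual reviewer would miss.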

It worked like a charm. The MGB researchers set their agents loose on over 3,300 clinical notes from 200 anonymized patients, then had human reviewers take their own look.

  • The agents detected cognitive impairment with 91% sensitivity, nearly matching expert-level accuracy – without any human intervention needed after deployment.
  • When the AI and human reviewers disagreed, an independent expert validated the AI’s reasoning 58% of the time – meaning the system was often making sound clinical judgments that initial human review had missed.

The cherry on top? The MGB team open-sourced the underlying framework, Pythia, alongside the study, enabling any provider org to deploy autonomous prompt optimization for their own AI screening applications.

The Takeaway

LLMs have opened the door to proactive screening at scale, and MGB just provided an excellent proof of concept using AI agents that turn everyday documentation into a chance to catch cognitive decline during the optimal treatment window.

ARISE Maps the State of Clinical AI

There have probably been hundreds of reports on the medical AI landscape, but there’s only been one State of Clinical AI from the rockstar team at ARISE.

The AI opus delivers the most complete review we’ve seen of a field that’s moving faster than its evaluation practices. It looked at the most influential clinical AI studies from 2025 to answer a trio of important questions:

  • Where does AI meaningfully improve care once it leaves research settings?
  • Where does performance break down?
  • Where do risks remain underexamined?

ARISE brought the heat. The Stanford-Harvard research network produced more highlights than we could count, but here’s a roundup of some of our favorites.

Impressive results in narrow evaluations. AI models have shown “superhuman performance” in research settings, but these results often depend on how narrowly the problem is framed. 

  • In one study, researchers modified standard medical multiple-choice questions so that the correct answer became “none of the other answers.” The clinical reasoning required to solve the question didn’t change. Model performance did. Accuracy dropped sharply across leading AI models, in some cases by over a third.
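The perturbation above is simple to reproduce: swap the correct option for a catch-all so the medical knowledge required is unchanged, but pattern-matching a familiar answer string no longer works. A minimal sketch (the question schema is hypothetical, not the study's actual format):

```python
# Replace the correct option with "None of the other answers" while
# leaving the stem and distractors untouched.
def perturb(question: dict) -> dict:
    options = list(question["options"])
    idx = options.index(question["answer"])
    options[idx] = "None of the other answers"  # correct answer becomes the catch-all
    return {"stem": question["stem"], "options": options,
            "answer": "None of the other answers"}

q = {"stem": "First-line treatment for anaphylaxis?",
     "options": ["IM epinephrine", "Oral antihistamine", "IV steroids"],
     "answer": "IM epinephrine"}
print(perturb(q)["answer"])
```

A model reasoning from first principles should be unaffected by this transformation; the sharp accuracy drops suggest some benchmark performance rests on recognizing answer strings rather than clinical reasoning.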

AI clearly helps prediction at scale. Although diagnostic reasoning was a mixed bag, several studies demonstrated that AI excels at identifying early warning signals from large datasets.

  • A hospital-based study found that a model trained on continuous wearable vital signs predicted patient deterioration up to 24 hours before standard alerts, identifying patients at risk for ICU transfer, cardiac arrest, or death while there was still time to intervene.

Most studies still don’t resemble the reality of healthcare. Clinical work has little to do with answering exam questions, and much to do with reviewing charts, coordinating care, and deciding when not to intervene.

  • A review of 500+ studies found that nearly half of them tested models using medical exam-style questions. Only 5% used real patient data, very few measured whether the models recognized uncertainty, and even fewer examined bias or fairness.

Now what? ARISE offered a few focus areas for 2026 that hit the center of the bullseye for building trust in the latest AI models.  

  • Evaluate models using real-world scenarios to drive evidence-based medicine.
  • Prioritize human-computer interaction design as much as primary outcomes.
  • Measure uncertainty, bias, and harm – especially when it comes to patient-facing AI.

The Takeaway

Healthcare AI has arrived, and ARISE made it clear that innovation won’t be driven by newer models alone. It will depend on whether health systems, researchers, and regulators are willing to apply the same evidence standards to AI that they expect out of any other clinical solution.

Anthropic and OpenAI Set Sights on Providers

Digital health has some fresh competition. Less than a week after OpenAI launched ChatGPT Health, Anthropic crashed the party with the grand debut of Claude for Healthcare.

Player 2 has entered the fight. Anthropic’s headlining feature for consumers is identical to ChatGPT Health’s – answers grounded in the patient’s own medical history.

  • Claude for Healthcare lets patients securely upload their health records and app data to unlock the same wide-ranging benefits as ChatGPT Health, such as spotting trends, preparing for visits, interpreting lab results… so on and so forth.
  • The two even share some overlapping partner apps like Function and Apple Health, but the similarities end there. 

Claude for Healthcare gets providers in on the action. Unlike OpenAI’s shiny new patient-facing solution, Claude for Healthcare comes with a suite of “Connectors” that enable it to support previously out-of-reach workflows. The list includes:

  • Prior auth reviews and coverage verifications [CMS Coverage Database]
  • Medical coding and billing accuracy [ICD-10]
  • Provider verification and credentialing [NPI Registry]

OpenAI hasn’t taken any days off. It followed up last week’s big ChatGPT Health news with the launch of ChatGPT for Healthcare – similar names, very different products.

  • ChatGPT for Healthcare is OpenAI’s enterprise solution to the Anthropic problem. It brings new provider-facing capabilities like care path management, referral letter generation, and clinical search (tough break for Doximity and Wolters Kluwer).

The fun doesn’t end there. OpenAI added to its hot streak by picking up Torch, a four-person startup building “a medical memory for AI.” The Information pinned the price tag at $100M. 

  • Torch feeds scattered records into a context engine that connects the dots between visit notes, lab results, wearable data, and any other medical info you can think of. 
  • That pitch rhymes perfectly with ChatGPT Health’s value prop, and the Torch team will now be helping boost the new solution’s medical memory across its inaugural cohort of partner apps.

The Takeaway

What a week for our little corner of the industry. OpenAI and Anthropic are diving in head first, and their tech, ambition, and pockets might even be deeper than the choppy legal waters.

Foundation Models Can Compromise Patient Privacy

Foundation models trained on EHR data hold massive potential for clinical applications, but a new study out of MIT shows that they might have just as much potential to violate patient privacy.

Generalized knowledge makes better predictions. EHR foundation models normally draw on a collection of de-identified patient records to produce their outputs.

  • That’s not a problem on its own, but unintended “memorization” also allows these models to serve answers based on a single record from their training data. 

Therein lies the problem. To quantify the risk of these models revealing sensitive information, MIT researchers developed structured tests to determine how easily an attacker with partial knowledge of a patient – think lab results or demographic details – could extract further identifiable info through targeted prompts.

The tests measured memorization as a function of: 

  • the amount of prior information an attacker needs to extract a record
  • the risk associated with the revealed information
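The shape of those tests can be sketched with a toy membership probe. Here the "model" is stubbed as a lookup over synthetic memorized records (the real study probed EHRMamba with targeted prompts); it shows how adding attacker-known attributes narrows the candidate set until a single record is exposed:

```python
# Synthetic "memorized" records standing in for a model's training data.
MEMORIZED = [
    {"age": 54, "sex": "F", "dx": "rare_metabolic_disorder", "record_id": "A-1"},
    {"age": 54, "sex": "F", "dx": "hypertension", "record_id": "A-2"},
]

def probe(known: dict) -> list[dict]:
    """Return memorized records consistent with the attacker's partial knowledge."""
    return [r for r in MEMORIZED if all(r[k] == v for k, v in known.items())]

# Common attributes leave the probe ambiguous (lower risk)...
print(len(probe({"age": 54, "sex": "F"})))
# ...but adding a rare diagnosis pins down one record (higher risk).
print(len(probe({"age": 54, "sex": "F", "dx": "rare_metabolic_disorder"})))
```

The toy mirrors both axes the tests measure: how much prior information the attacker needs, and why patients with rare conditions are more exposed.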

What did they find? After validating the tests using EHRMamba, an EHR foundation model with publicly available training data, the researchers reached a pair of conclusions that weren’t too surprising to see.

  • The more information attackers have on a patient, the greater that patient’s privacy risk.
  • Some patients, particularly those with rare conditions, are more susceptible.

Not all information is harmful. The researchers found that some details, such as a patient’s age or gender, present a relatively lower risk in the event of a data breach. 

  • This info wasn’t very helpful in targeted prompts that probed the model for memorized records, and it isn’t very damaging if the answers reveal it.
  • Other info, such as a rare disease diagnosis, was flagged as significantly more harmful. It posed a higher risk of getting the model to expose patient-specific details (especially in combination with other identifiers), and it can be especially sensitive if revealed through probing.

The Takeaway

EHR foundation models need some degree of memorization to solve complex tasks, but memorizing and revealing patient records is obviously out of the question. The tradeoff between performance and privacy is an ongoing challenge, but MIT just delivered a framework for evaluating some of the risks that can help strike the right balance.

OpenAI Jumps Into Healthcare Arena With ChatGPT Health

If OpenAI wasn’t already a major healthcare player, the launch of ChatGPT Health definitely just made it one.

It’s the gamechanger everyone saw coming. OpenAI even teed up the launch with a report showing that 40M people are already using ChatGPT for healthcare advice on a daily basis. 

ChatGPT Health is about to take that a massive step further. 

Here’s a look at the core features:

  • ChatGPT Health operates inside a dedicated health environment with additional privacy layers (conversations aren’t used for model training, optional two-factor authentication).
  • Users can securely upload their complete medical records (courtesy of b.well).
  • Users can connect apps to inform answers (Apple Health, Function, MyFitnessPal).
  • The model uses longitudinal health data, labs, and visit summaries to help spot trends.

OpenAI is moving beyond general health advice. The extra clinical context gives ChatGPT Health the ability to give better answers at scale, and that’s good news for patients.

A few of the most obvious benefits for patients include:

  • Empowering them to take a more active role in their care.
  • Helping them uncover trends in their overall health.
  • Reducing confusion around test results.
  • Reinforcing care plans between visits.
  • The list could go on for a while.

ChatGPT Health isn’t actually HIPAA compliant. Then again, it doesn’t need to be.

  • Consumer health apps like ChatGPT Health aren’t covered by HIPAA, and to OpenAI’s credit, it appears to have done a great job with the necessary disclaimers.
  • The dedicated health environment was also developed with input from 260+ physicians, and it leverages a physician-authored framework for safety, clarity, and escalation.

The question now is, who’s accountable when things go wrong? Millions of patients are about to start showing up to visits armed with advice from ChatGPT Health, which means its AI fingerprints will be all over their questions, concerns, and even clinical decisions. The tech might be ready. The governance isn’t.

  • When ChatGPT Health mentions an unproven treatment and a patient follows through, or interprets a worrying lab value as benign, who carries the liability?
  • OpenAI? The physicians who authored the safety framework? The patient who followed the advice? It’s tough to say, but providers – and their patients – still need a clear answer.

The Takeaway

Everyone wants a doctor in their pocket, and ChatGPT Health just filled that role for millions of patients… even if OpenAI explicitly told them it wasn’t up for the job.

Crystal Ball Compilation: Digital Health in 2026 

Welcome back to the first Digital Health Wire of 2026! The healthcare industry doesn’t take any days off, but we hope our readers managed to catch a break over the holidays to recharge for the big things to come in the new year.

The past few weeks have had plenty of fortune tellers predicting what those big things will be, so we’re kicking off the year with a compilation of the clearest crystal balls.

Let’s get right into it.

CommonSpirit Health – Five Health Tech Predictions for 2026, Dr. Minal Shah

  • Favorite Forecast: In 2026, AI projects without strategic alignment are heading straight to the pilot graveyard. When organizations chase what’s possible instead of what’s strategic, they burn human capital on change efforts that never scale to real impact.
  • Big Idea: “Platform vs. point solution is a false dichotomy – and I think we’re asking the wrong question. The real question isn’t which approach to take. It’s whether we’ve done the hard work of understanding what the organization actually needs before we choose a path forward. That means moving from ‘what can we do with AI?’ to ‘what should we be doing with AI?'”

Out-of-Pocket – Out-Of-Pocket’s 2026 Predictions, Nikhil Krishnan

  • Favorite Forecast: Intellectual property lines will be drawn for AI. We’ve already seen a ton of legal battles around copyrights, but the dealmaking is just getting started.
  • Big Idea: “Healthcare has a TON of companies that have copyrights and IP ownership over critical parts of healthcare information. OpenEvidence for example has signed several agreements with medical societies, NEJM, etc. Who will the AMA partner with for CPT codes? Which companies will the EHRs partner with to license their data?”

Second Opinion – Healthcare in 2026, Christina Farr & Annalisa Merelli

  • Favorite Forecast: The largest digital health companies will start flocking to CMS’ new ACCESS program to find a better business model in Medicare, while also duking it out for a slice of the available rural health funding. 
  • Big Idea: “There’s no question digital health companies will be the beneficiaries, particularly given that the executive and policymaker running Medicare – Chris Klomp – has an entrepreneurial background and formerly sat on the board of venture-backed Maven Clinic.”

Becker’s – How the AI conversation will change in 2026, Zachary Lipton

  • Favorite Forecast: Clinical decision support has been trapped in a frustrating middle zone for years: better than manually searching guidelines, but worse than talking to a specialist. CDS will finally start evolving beyond search with contextual awareness.
  • Big Idea: “This is the year CDS evolves past glamorized search. Next-generation CDS will reason jointly over medical literature, the patient’s record and current visit context, helping clinicians apply knowledge, not just retrieve it.”

Forbes – 10 Healthcare Industry Predictions For 2026, Sachin Jain

  • Favorite Forecast: Healthcare’s AI revolution will hit speed bumps. While AI has shown considerable promise for relatively straightforward uses like ambient dictation, its application in other domains will be more challenged and problematic.
  • Big Idea: “Agentic AI holds significant promise, but legacy operators will be slow to change deeply ingrained processes, values, and attitudes. AI snake-oil salespeople (fueled by venture capitalists chasing outsized returns) have flooded the zone, a phenomenon that is sure to fuel false starts and threaten the pace and depth of true organizational change in the short-run.”

Hospitalogy – 8 Predictions for Healthcare 2026, Blake Madden

  • Favorite Forecast: In 2026, enterprise buyers will start demanding consolidation. The operational model shifts from “best of breed for each use case” to “who can orchestrate AI across our entire administrative and clinical workflow?”
  • Big Idea: “This is where the Palantir playbook becomes relevant. The firm is already working with HCA and others to deploy AI infrastructure that spans clinical, operational, and financial domains. The value proposition isn’t any single algorithm. It’s the orchestration layer that ties disparate data sources into unified decision support.”

Notable – Healthcare’s pivotal year for AI transformation, Dr. Aaron Neinstein

  • Favorite Forecast: New practices will be built from scratch around AI Agents to support panel sizes three to five times larger at equal or higher quality and dramatically lower cost. At the same time, human connection will take center stage again.
  • Big Idea: “AI will handle pattern analysis and routine adjustments, so clinicians can shift from memorizing facts to focusing on meaning… Because of this, nurses, MAs, and care coordinators will move up the value chain, as they can spend more time on empathy, clinical judgement, and complex situations rather than administrative tasks.”

The Takeaway

Healthcare still has its fair share of challenges, but it has just as many tailwinds pushing it toward new solutions. Cheers to everyone making those solutions a reality in the new year.
