Anthropic and OpenAI Set Sights on Providers

Digital health has some fresh competition. Less than a week after OpenAI launched ChatGPT Health, Anthropic crashed the party with the grand debut of Claude for Healthcare.

Player 2 has entered the fight. Anthropic’s headlining feature for consumers mirrors ChatGPT Health’s – answers grounded in the patient’s own medical history.

  • Claude for Healthcare lets patients securely upload their health records and app data to unlock the same wide-ranging benefits as ChatGPT Health, such as spotting trends, preparing for visits, and interpreting lab results.
  • The two even share some overlapping partner apps like Function and Apple Health, but the similarities end there. 

Claude for Healthcare gets providers in on the action. Unlike OpenAI’s shiny new patient-facing solution, Claude for Healthcare comes with a suite of “Connectors” that enable it to support previously out-of-reach workflows. The list includes:

  • Prior auth reviews and coverage verifications [CMS Coverage Database]
  • Medical coding and billing accuracy [ICD-10]
  • Provider verification and credentialing [NPI Registry]

OpenAI hasn’t taken any days off. It followed up last week’s big ChatGPT Health news with the launch of ChatGPT for Healthcare – similar names, very different products.

  • ChatGPT for Healthcare is OpenAI’s enterprise solution to the Anthropic problem. It brings new provider-facing capabilities like care path management, referral letter generation, and clinical search (tough break for Doximity and Wolters Kluwer).

The fun doesn’t end there. OpenAI added to its hot streak by picking up Torch, a four-person startup building “a medical memory for AI.” The Information pinned the price tag at $100M. 

  • Torch feeds scattered records into a context engine that connects the dots between visit notes, lab results, wearable data, and any other medical info you can think of. 
  • That pitch rhymes perfectly with ChatGPT Health’s value prop, and the Torch team will now be helping boost the new solution’s medical memory across its inaugural cohort of partner apps.

The Takeaway

What a week for our little corner of the industry. OpenAI and Anthropic are diving in head first, and their tech, ambition, and pockets might even be deeper than the choppy legal waters.

OpenAI Jumps Into Healthcare Arena With ChatGPT Health

If OpenAI wasn’t already a major healthcare player, the launch of ChatGPT Health definitely just made it one.

It’s the game-changer everyone saw coming. OpenAI even teed up the launch with a report showing that 40M people are already using ChatGPT for healthcare advice on a daily basis.

ChatGPT Health is about to take that a massive step further. 

Here’s a look at the core features:

  • ChatGPT Health operates inside a dedicated health environment with additional privacy layers (conversations aren’t used for model training, and two-factor authentication is available).
  • Users can securely upload their complete medical records (courtesy of b.well).
  • Users can connect apps to inform answers (Apple Health, Function, MyFitnessPal).
  • The model uses longitudinal health data, labs, and visit summaries to help spot trends.

OpenAI is moving beyond general health advice. The extra clinical context lets ChatGPT Health deliver better answers at scale, and that’s good news for patients.

A few of the most obvious benefits for patients include:

  • Empowering them to take a more active role in their care.
  • Helping them uncover trends in their overall health.
  • Reducing confusion around test results.
  • Reinforcing care plans between visits.
  • The list could go on for a while.

ChatGPT Health isn’t actually HIPAA compliant. Then again, it doesn’t need to be.

  • Consumer health apps like ChatGPT Health aren’t covered by HIPAA, and to OpenAI’s credit, it appears to have done a great job with the necessary disclaimers.
  • The dedicated health environment was also developed with input from 260+ physicians, and it leverages a physician-authored framework for safety, clarity, and escalation.

The question now is, who’s accountable when things go wrong? Millions of patients are about to start showing up to visits armed with advice from ChatGPT Health, which means its AI fingerprints will be all over their questions, concerns, and even clinical decisions. The tech might be ready. The governance isn’t.

  • When ChatGPT Health mentions an unproven treatment and a patient follows through, or when it interprets a worrying lab value as benign, who carries the liability?
  • OpenAI? The physicians who authored the safety framework? The patient who followed the advice? It’s tough to say, but providers – and their patients – still need a clear answer.

The Takeaway

Everyone wants a doctor in their pocket, and ChatGPT Health just filled that role for millions of patients… even if OpenAI explicitly told them it wasn’t up for the job.
