If OpenAI wasn’t already a major healthcare player, the launch of ChatGPT Health definitely just made it one.
It’s the game-changer everyone saw coming. OpenAI even teed up the launch with a report showing that 40M people are already using ChatGPT for healthcare advice daily.
ChatGPT Health is about to take that a massive step further.
Here’s a look at the core features:
- ChatGPT Health operates inside a dedicated health environment with additional privacy layers: conversations aren’t used for model training, and two-factor authentication is available.
- Users can securely upload their complete medical records (courtesy of b.well).
- Users can connect apps to inform answers (Apple Health, Function, MyFitnessPal).
- The model uses longitudinal health data, labs, and visit summaries to help spot trends.
OpenAI is moving beyond general health advice. The extra clinical context lets ChatGPT Health deliver better answers at scale, and that’s good news for patients.
A few of the most obvious benefits for patients include:
- Empowering them to take a more active role in their care.
- Helping them uncover trends in their overall health.
- Reducing confusion around test results.
- Reinforcing care plans between visits.
- The list could go on for a while.
ChatGPT Health isn’t actually HIPAA-compliant. Then again, it doesn’t need to be.
- Consumer health apps like ChatGPT Health aren’t covered by HIPAA, and to OpenAI’s credit, it appears to have done a great job with the necessary disclaimers.
- The dedicated health environment was also developed with input from 260+ physicians, and it leverages a physician-authored framework for safety, clarity, and escalation.
The question now is, who’s accountable when things go wrong? Millions of patients are about to start showing up to visits armed with advice from ChatGPT Health, which means its AI fingerprints will be all over their questions, concerns, and even clinical decisions. The tech might be ready. The governance isn’t.
- When ChatGPT Health mentions an unproven treatment and a patient follows through, or when it interprets a worrying lab value as benign, who carries the liability?
- OpenAI? The physicians who authored the safety framework? The patient who followed the advice? It’s tough to say, but providers – and their patients – still need a clear answer.
The Takeaway
Everyone wants a doctor in their pocket, and ChatGPT Health just filled that role for millions of patients… even if OpenAI explicitly told them it wasn’t up for the job.
