OpenAI launches ChatGPT Health for private, personal health conversations
OpenAI has announced the launch of “ChatGPT Health,” a dedicated space inside ChatGPT that lets you discuss health and wellness questions while optionally grounding answers in your own data, such as medical records, labs, and information from selected wellness apps.
ChatGPT Health arrives with two big claims: that health is already one of the platform’s dominant use cases, and that sensitive health chats need stronger compartmentalization than a normal chatbot thread. OpenAI says more than 230 million people ask health and wellness questions on ChatGPT each week, based on a de-identified analysis of conversations.
For this dedicated space, OpenAI made an architectural shift: separate memory, separate data boundaries, and a connector model designed to pull in your own context, so the bot can explain your lab report, not a generic one.
Note, though, that OpenAI explicitly says ChatGPT Health is not intended for diagnosis or treatment, and especially not for mental healthcare.
I took a look at what this product offers, and what it doesn’t (and shouldn’t) offer. After all, the tool acts more like an AI doctor than a real one, with all the risks that entails.
- What OpenAI actually launched with ChatGPT Health
- The core design move: a one-way privacy wall
- Are Conversations in Health used to train the OpenAI foundation models?
- Who gets access first (and who doesn’t)
- What are the risks when using ChatGPT Health?
- A practical way to use ChatGPT Health without letting it use you
- What the launch of ChatGPT Health signals
- ChatGPT Health FAQ
- What is “ChatGPT Health”?
- Is “ChatGPT Health” meant to diagnose or treat?
- Who can access it right now?
- Where do you find it in ChatGPT?
- What can you connect inside Health?
- Are Medical Records available everywhere?
- Does Apple Health require iOS?
- Do you need to re-authorize apps inside Health?
- Are third-party apps enabled by default in Health?
- Can third-party apps read your Health chats or files automatically?
- Does Health have separate memory from your normal chats?
- Can your normal chats access Health data?
- Can Health use context from your non-Health chats?
- Is your Health data used to train OpenAI’s foundation models?
- Is “ChatGPT Health” end-to-end encrypted?
- What does “encrypted in transit and at rest” actually mean?
- What does OpenAI mean by “purpose-built encryption and isolation”?
- Who powers the Medical Records connection?
- How do you disconnect Medical Records or apps?
- What happens when you disconnect Medical Records?
- Is Health covered by HIPAA?
- Could OpenAI be required to hand over Health data?
What OpenAI actually launched with ChatGPT Health
OpenAI describes “ChatGPT Health” as a new, dedicated experience you select from the ChatGPT sidebar. It lives in its own space, with its own memory system and storage separation for health chats, uploaded files, and connected apps.
OpenAI’s examples for ChatGPT Health focus on everyday “navigation” tasks:
- summarize recent bloodwork before a doctor visit
- explain test results in plain language
- prepare questions for an appointment
- compare insurance tradeoffs based on your healthcare patterns
- use connected nutrition/fitness data to discuss diet and workouts
You can also use typical ChatGPT capabilities such as file/photo uploads, voice mode, dictation, search, and deep research inside the ChatGPT Health space, with responses referencing connected health information when relevant.
The core design move: a one-way privacy wall
OpenAI’s most concrete privacy claim is the boundary between “ChatGPT Health” and the rest of ChatGPT.
Inside ChatGPT Health
- ChatGPT Health chats, connected apps, and uploaded health files are stored separately.
- ChatGPT Health has separate memories to keep health context contained.
Between ChatGPT Health and regular ChatGPT
- ChatGPT Health information and memories do not flow back into non-Health chats.
- Chats outside ChatGPT Health cannot access Health files, conversations, or Health memories.
OpenAI also says ChatGPT Health may use non-Health context (example: a recent move or lifestyle change) to make a Health conversation more relevant. The direction is intentionally asymmetric: regular-life context may flow in; health context does not flow out.
For users, that boundary will matter because it changes how “memory” feels. If you tell normal ChatGPT something health-sensitive, it can become entangled with the rest of your usage history. OpenAI adds that if you start a health-related conversation outside ChatGPT Health, ChatGPT will suggest moving it into Health for additional protections.
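To make the asymmetry concrete, here is a toy model of the one-way wall described above. This is purely illustrative; the names and structure are assumptions, not OpenAI’s actual implementation.

```python
# Toy model of the one-way privacy wall (illustrative only; not OpenAI's
# actual architecture). Health chats may read general context, but regular
# chats can never read health context.

class MemoryStore:
    """A simple append-only list of remembered facts."""
    def __init__(self):
        self.facts = []

    def remember(self, fact: str) -> None:
        self.facts.append(fact)

def build_context(space: str, general: MemoryStore, health: MemoryStore) -> list[str]:
    """Assemble the context a conversation in the given space may see."""
    if space == "health":
        # Health sees both: regular-life context flows IN.
        return general.facts + health.facts
    # Regular chats see only general memories: health context never flows OUT.
    return list(general.facts)

general, health = MemoryStore(), MemoryStore()
general.remember("recently moved to Denver")
health.remember("LDL cholesterol: 140 mg/dL")

print(build_context("health", general, health))   # sees both stores
print(build_context("general", general, health))  # sees only general context
```

The design choice worth noticing is that the boundary is enforced at read time, not by where the data is typed: whichever space a fact lands in determines who can ever see it.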
Are Conversations in Health used to train the OpenAI foundation models?
OpenAI states that conversations and files in ChatGPT are encrypted at rest and in transit, and that Health adds “purpose-built encryption and isolation” to compartmentalize health conversations. Importantly, the encryption is not end-to-end, so OpenAI or a service provider can, in principle, decrypt the data on their systems.
The most attention-grabbing line, though, is the training policy: “Conversations in Health are not used to train our foundation models.”
OpenAI also points to account-level controls such as multi-factor authentication (MFA) as an additional protection against unauthorized access.
Connecting medical records: why b.well shows up in this story
The moment you let a consumer AI tool touch medical records, integration becomes the product. OpenAI says it partnered with b.well to enable access to “trusted U.S. healthcare providers.”
b.well says its network supports consumer-mediated access across more than 2.2 million providers and 320 health plans, labs, and other sources, using standards such as FHIR-based APIs, consent management, and auditability.
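FHIR-based APIs of the kind b.well describes exchange structured JSON resources. As a rough illustration of what a connector has to work with, the snippet below parses a minimal, hand-written FHIR R4 `Observation` into a plain-language summary. The sample values are invented; real payloads are richer and provider-specific.

```python
import json

# A minimal, hand-written FHIR R4 Observation (invented sample values;
# real records returned via a network like b.well's vary by provider).
observation_json = """
{
  "resourceType": "Observation",
  "status": "final",
  "code": {"text": "Hemoglobin A1c"},
  "valueQuantity": {"value": 5.4, "unit": "%"},
  "effectiveDateTime": "2025-11-03"
}
"""

obs = json.loads(observation_json)
summary = (f"{obs['code']['text']}: "
           f"{obs['valueQuantity']['value']} {obs['valueQuantity']['unit']} "
           f"(taken {obs['effectiveDateTime']}, status: {obs['status']})")
print(summary)  # Hemoglobin A1c: 5.4 % (taken 2025-11-03, status: final)
```

The point is that a standardized resource shape is what makes “summarize my labs” tractable at all: the model gets labeled fields, not a scanned PDF.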
Usefully, OpenAI says you can revoke access to medical records at any time via Settings.
The wellness app stack: what’s in, what’s constrained
OpenAI’s launch list mixes medical records with lifestyle apps:
- Medical Records (lab results, visit summaries, clinical history)
- Apple Health (movement, sleep, activity patterns)
- Function (lab test insights, nutrition ideas)
- MyFitnessPal (macros, recipes, nutrition guidance)
- Weight Watchers (including GLP-1 meal ideas and guidance)
- AllTrails, Instacart, Peloton
Two constraints are explicit:
- Medical record integrations and some apps are U.S.-only at launch.
- Connecting Apple Health requires iOS.
OpenAI also adds a governance layer around app access: apps can only connect to your health data with explicit permission, even if you already connected the same app to ChatGPT outside Health, and OpenAI says Health apps must meet additional privacy/security requirements and review.
Physician involvement and the “HealthBench” angle
OpenAI’s argument for safety leans on a human feedback pipeline and a formal evaluation benchmark. OpenAI says it collaborated over two years with more than 260 physicians across 60 countries, who provided feedback on model outputs over 600,000 times across 30 areas of focus.
It also says the model that powers Health is evaluated against clinical standards using “HealthBench,” a framework OpenAI previously released in May 2025.
OpenAI’s “HealthBench” page describes:
- 5,000 simulated, multi-turn, multilingual health conversations
- a mix of synthetic generation and human adversarial testing
- physician-written rubrics that grade model responses on what to include/avoid, weighted by clinical judgment
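As a rough sketch of how rubric-based grading of this kind can work (a generic illustration, not HealthBench’s actual scoring code), each physician-written criterion carries a weight, with positive weights rewarding content a response should include and negative weights penalizing content it should avoid:

```python
# Generic weighted-rubric scorer (illustrative; not HealthBench's code).
# Positive weights reward required content; negative weights penalize
# content the response should avoid.

def score_response(criteria: list[dict]) -> float:
    """Return the fraction of the maximum achievable score earned, floored at 0."""
    earned = sum(c["weight"] for c in criteria if c["met"])
    max_score = sum(c["weight"] for c in criteria if c["weight"] > 0)
    return max(0.0, earned / max_score)

criteria = [
    {"desc": "advises seeing a clinician for diagnosis", "weight": 5, "met": True},
    {"desc": "explains the lab value in plain language",  "weight": 3, "met": True},
    {"desc": "recommends a specific prescription drug",   "weight": -8, "met": False},
]

print(score_response(criteria))  # 1.0: all positive criteria met, no penalties triggered
```

A scheme like this lets clinical judgment live in the weights rather than in the grader: the same scorer handles a cautious triage answer and a dangerous one, and a single triggered penalty can wipe out an otherwise complete response.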
Who gets access first (and who doesn’t)
OpenAI is rolling out “ChatGPT Health” through a waitlist and a limited early cohort. Eligibility in the announcement is specific:
- early access starts with a small group of users on Free, Go, Plus, and Pro plans
- the cohort is outside the European Economic Area, Switzerland, and the United Kingdom for now.
Oddly enough, non-eligible users are greeted by a rather poetic 404 error page, as you can see below.

OpenAI plans to expand access and make Health available to all users on web and iOS “in the coming weeks,” but it does not provide a dated rollout schedule beyond that phrasing.
What are the risks when using ChatGPT Health?
A health feature is mainly judged by failure modes. Three issues sit under every “AI + personal health data” launch:
1) Over-trust: A fluent answer can read like clinical confidence, even when the system is operating with incomplete context or making an error. OpenAI tries to address this by positioning Health as support for appointments and understanding patterns, not diagnosis or treatment.
2) Data sensitivity and legal boundaries: OpenAI emphasizes compartmentalization and training limits, but consumer health tools do not automatically inherit the compliance model of a hospital portal. HIPAA does not simply “apply” to a consumer-facing product, and legal access to the data can still arise in some circumstances.
3) Security history and scale: OpenAI’s own product history includes security incidents (for example, a March 2023 issue involving exposure of some user information), which becomes more consequential when the data category shifts from ordinary chats to medical records.
None of these points negates the utility, but each deserves serious attention.
A practical way to use ChatGPT Health without letting it use you
OpenAI wants Health to function as a navigation layer, helping you translate, summarize, and prepare. That suggests a simple discipline for users:
- Treat summaries as draft notes for a clinician, not conclusions.
- Ask the system to quote back the exact values it is using from your records, then compare them to the source document. (You can do this without asking for diagnosis.)
- Keep a list of “appointment questions” generated from your labs or visit summary, then validate each with your doctor.
- Use the separation feature deliberately: if something is health-sensitive, keep it inside Health so it stays within the one-way boundary OpenAI describes.
What the launch of ChatGPT Health signals
“ChatGPT Health” is OpenAI’s clearest attempt to separate “general assistant” from “sensitive domain assistant” without forcing users into a clinical platform. It combines three trends that have been moving in parallel: fragmented personal health data, consumer-grade AI as a default interface, and a growing demand for tighter privacy boundaries inside AI products.
Whether it succeeds will depend less on the headline features than on the boring parts: consent flows, auditability, connector security, and how reliably the system pushes users back toward clinicians when the situation crosses from “help me understand” into “help me decide.”
ChatGPT Health FAQ
What is “ChatGPT Health”?
“ChatGPT Health” is a dedicated space inside ChatGPT for health and wellness conversations, with optional connections to medical records and wellness apps so answers can reference your data.
Is “ChatGPT Health” meant to diagnose or treat?
No. OpenAI states it supports, not replaces, medical care and is not intended for diagnosis or treatment.
Who can access it right now?
OpenAI’s initial rollout targets a small early group outside the European Economic Area (EEA), Switzerland, and the UK, across Free, Go, Plus, and Pro plans.
Where do you find it in ChatGPT?
You access it by selecting “Health” from the ChatGPT sidebar (once enabled for your account).
What can you connect inside Health?
At launch, OpenAI lists Medical Records (EHR), Apple Health, and supported third-party wellness apps (including Peloton, MyFitnessPal, Function, Instacart, AllTrails, and Weight Watchers).
Are Medical Records available everywhere?
No. OpenAI’s Help Center states Medical Records (EHR) is U.S.-only at launch and requires users to be over 18.
Does Apple Health require iOS?
Yes. OpenAI states Apple Health syncing requires iOS (an iPhone).
Do you need to re-authorize apps inside Health?
Yes. If you connected an app in normal ChatGPT, OpenAI says you must grant permission again inside Health.
Are third-party apps enabled by default in Health?
No. OpenAI states third-party apps are off by default unless you explicitly turn them on.
Can third-party apps read your Health chats or files automatically?
No. OpenAI says apps can’t access your Health chats, memories, or files unless you explicitly enable that app in Health; once enabled, an app may receive relevant context needed to complete your request.
Does Health have separate memory from your normal chats?
Yes. OpenAI states Health uses Health-specific memories and keeps Health chats/files/memories separate from the rest of ChatGPT.
Can your normal chats access Health data?
No. OpenAI states conversations outside Health can’t access Health conversations, files, or memories, and Health information does not flow back into main chats.
Can Health use context from your non-Health chats?
Yes. OpenAI says Health may use context from your main chats to make a Health conversation more relevant, but Health data doesn’t flow back.
Is your Health data used to train OpenAI’s foundation models?
OpenAI says Health chats, files, and memories are not used to train its foundation models.
Is “ChatGPT Health” end-to-end encrypted?
No. OpenAI describes layered encryption for Health but says it is “not end-to-end encryption.”
What does “encrypted in transit and at rest” actually mean?
It means data is protected while moving between your device and OpenAI’s systems (in transit) and while stored on servers (at rest). It does not mean only you hold the keys; “not end-to-end encrypted” implies the service can decrypt data within its systems under its access controls.
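The key-custody distinction can be shown with a toy sketch. This is not real cryptography (a repeating-XOR “cipher” stands in for a real algorithm like AES); the point is only who holds the key in each model.

```python
import secrets

# Toy illustration of key custody (NOT real cryptography). The cipher is
# a placeholder; what matters is which party holds the key.

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

toy_decrypt = toy_encrypt  # XOR is its own inverse

message = b"LDL cholesterol: 140 mg/dL"

# Encryption "at rest": the SERVICE generates and holds the key, so it can
# decrypt stored data under its own access controls (the ChatGPT model).
server_key = secrets.token_bytes(16)
stored = toy_encrypt(message, server_key)
assert toy_decrypt(stored, server_key) == message  # the service can read it back

# End-to-end: only the USER's device holds the key; the server stores
# ciphertext it cannot decrypt. ChatGPT Health is explicitly NOT this model.
user_key = secrets.token_bytes(16)
e2e_blob = toy_encrypt(message, user_key)
assert e2e_blob != message  # without user_key, the server sees opaque bytes
```

That custody difference is exactly why “encrypted in transit and at rest” and “end-to-end encrypted” are not interchangeable claims.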
What does OpenAI mean by “purpose-built encryption and isolation”?
OpenAI claims Health adds extra encryption and isolation designed to compartmentalize health conversations beyond standard ChatGPT protections. Public materials do not spell out the exact implementation details (key design, storage boundaries, or access pathways).
Who powers the Medical Records connection?
OpenAI says it partners with b.well for access to trusted U.S. healthcare providers in the Medical Records feature.
How do you disconnect Medical Records or apps?
You disconnect apps and medical records via Settings > Apps, and you can delete Health memories via Settings > Personalization (or within Health memory controls).
What happens when you disconnect Medical Records?
OpenAI states Health stops accessing that source going forward, and your medical records are deleted from its third-party partner b.well.
Is Health covered by HIPAA?
OpenAI’s health lead said HIPAA doesn’t apply in this consumer-product setting (HIPAA typically applies to clinical/professional healthcare contexts).
Could OpenAI be required to hand over Health data?
OpenAI said it would provide access where required through valid legal processes or emergency situations.
