Why You Shouldn’t Trust AI Doctors (Yet)
(Photo by Alexander Sinn)

AI chatbots (‘AI doctors’) give inaccurate and inconsistent medical advice that could present risks to users. That’s the verdict of a study from the University of Oxford. Surprising? Not really, because there is a good reason why the adoption of AI in healthcare is cautious: lives are at stake and regulations are strict.
As we reported before, AI will augment healthcare workers rather than fully replace them, and more recently ChatGPT launched its Health product in a controlled environment. Where people used to google information about their health, they now use AI tools en masse as ‘AI doctors’. AI chatbots and AI doctor apps answer your health questions in seconds, but the accuracy is not really there. The results these ‘AI doctors’ give sound calm, precise, and very sure of themselves, and that combination is exactly why you need to be careful.
Right now, AI is good enough to sound like a doctor, but still wrong often enough to put you at risk. And in medicine, “often” can mean a single missed stroke, heart attack, or cancer relapse.
In this article I will explain how accurate ‘AI doctors’ actually are, how symptom checkers compare to real physicians, where AI fails in real-world healthcare, and how to use AI safely for medical questions. However, the key rule should be: Always speak to a licensed clinician about your personal health concerns, even if an AI sounds pretty sure of itself.
- Why AI Doctors Feel So Trustworthy (Even When They’re Wrong)
- Is There A Difference Between Paid AIs And Free AIs When It Comes To Accuracy?
- How Accurate Are AI Doctors? What the Data Actually Shows
- AI Symptom Checkers vs Real Doctors: Who’s More Accurate?
- Is ChatGPT Safe for Medical Advice?
- Can AI Be Fooled by Medical Misinformation?
- Real-World Failures of AI in Healthcare
- The Hidden Risks of AI Medicine / AI Doctors Most People Don’t See
- When Should You NOT Use AI Doctors?
- How to Use AI Doctors for Health Questions Safely
- Are AI Doctors Safe to Trust?
Why AI Doctors Feel So Trustworthy (Even When They’re Wrong)
What do we mean by ‘AI doctors’? AI doctors are software systems that use artificial intelligence models (usually large language models and/or diagnostic algorithms) to give people medical-style information, such as symptom assessments, possible conditions, triage advice (self-care vs doctor vs ER), or treatment explanations, through a chat or app interface. They can mimic parts of what a human doctor does – explaining test results, listing likely causes, suggesting questions to ask your clinician – but they are not licensed healthcare professionals and do not replace in-person clinical examination, diagnosis, or treatment by a qualified doctor.
Millions of people now type symptoms into AI chatbots instead of calling a doctor or nurse. A recent study from the University of Oxford notes that large language model (LLM) chatbots are already being marketed to the public as tools to “support medical decisions,” and that health systems are actively exploring them for patient-facing use.
You feel drawn to these tools for predictable reasons. They respond instantly, at any time of day. They never appear irritated or rushed. They produce long, structured explanations when you ask follow-up questions. They do not react to your lifestyle, weight, or mental health history. Combined with headlines about AI models “passing medical exams,” this creates a strong impression of competence.
The interface also matters. A clean chat window with reassuring language and medical-sounding terminology feels more authoritative than scrolling through forum posts or ad-heavy health sites. When the same technology is embedded into respected brands (search engines, hospital portals, or health apps) the perceived trust level increases further, even when the underlying performance has not changed.
The problem is that perceived authority and actual safety are not the same thing. The Oxford user study shows that the moment real people start using these tools for their own symptoms, the apparent advantage of AI over normal web search disappears.
Is There A Difference Between Paid AIs And Free AIs When It Comes To Accuracy?
While there is often a real accuracy gap between paid and free AI tools, the key reason for this is not that you paid; it is that the paid tier usually unlocks a different, more capable model. Many platforms run an older or smaller model on the free plan and reserve their newest “flagship” model for subscribers. However, if both tiers use exactly the same underlying model, you can expect roughly the same accuracy. The moment the paid option gives you access to a newer, larger, better-trained system, you start to see fewer basic mistakes, better reasoning, and more consistent answers, so it looks as if paying “improved” the AI, while in reality you just switched models.
Across vendors, the pro or advanced model generally outperforms the free one on tasks that require multi-step reasoning, complex instructions, or precise control. That shows up in areas like writing structured content or formatting rules, handling long legal or technical documents, and solving code or math problems. Benchmark tests typically place these newer flagship models above the free-tier models on reasoning and knowledge tasks, which matches practical experience: the free tier is usually good enough for simple Q&A, while the paid tier is better suited for long, nuanced, or high-complexity work where errors cost time or money.
Price alone, however, does not remove the fundamental limitations of large language models. Paid AIs can (and will) still misunderstand vague prompts, state wrong facts with confidence, invent sources, or misinterpret data you paste in. They do not become perfectly up to date or medically, legally, or financially “safe” just because access costs a monthly fee. For anything high-stakes – whether it is about health decisions, legal strategy, financial planning, or safety-critical engineering – both free and paid systems should be treated as assistants that help you explore options and structure questions, and not as final authorities that replace domain experts.
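If you want to see for yourself whether a tier difference is really a model difference, one option is to run the same questions through both models and compare the scores. Below is a minimal sketch using the OpenAI Python SDK; the model names, the toy question set, and the keyword-based scoring are illustrative assumptions, not a real benchmark.

```python
# Minimal sketch: comparing a "free-tier" model and a "paid-tier" model on the
# same questions. Model names and the question set are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical mini-benchmark: (question, keyword expected in a correct answer)
QUESTIONS = [
    ("Which vitamin deficiency causes scurvy?", "vitamin c"),
    ("What does a pulse oximeter measure?", "oxygen"),
]

def accuracy(model_name: str) -> float:
    correct = 0
    for question, expected in QUESTIONS:
        resp = client.chat.completions.create(
            model=model_name,
            messages=[{"role": "user", "content": question}],
        )
        answer = (resp.choices[0].message.content or "").lower()
        correct += expected in answer  # crude keyword check, for illustration only
    return correct / len(QUESTIONS)

# Same questions, different underlying models: any accuracy gap comes from the
# model you are talking to, not from the subscription itself.
for model in ("gpt-4o-mini", "gpt-4o"):  # placeholders for a free vs paid tier
    print(model, accuracy(model))
```

Even a toy harness like this makes the point above: point both tiers at the same model name and the scores converge; swap in a stronger model and they diverge.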
But let’s dive into the data to check how well these AI doctors function.
How Accurate Are AI Doctors? What the Data Actually Shows
Across recent research, one pattern keeps repeating: AI looks strong on structured test questions, but real-world performance drops once you add human users, messy symptoms, and incomplete information.
AI Diagnostic Accuracy in Controlled Tests vs Real Life
In the Nature Medicine study from Oxford, researchers first tested three LLMs (including GPT-4-class systems) on 10 clinical scenarios under controlled conditions. When the models received complete, well-structured case descriptions, they:
- Identified the correct underlying condition in 94.9% of cases
- Chose an appropriate disposition (self-care, GP, or emergency care) in 56.3% of cases
The same scenarios were then given to 1,298 adults in the UK, who could either use AI tools, traditional web search, or official health websites to make decisions about the condition and the right action. When real people interacted with the AI:
- Correct condition chosen: 34.5%
- Correct action chosen: 44.2%
Performance with AI support was no better than performance with conventional internet search or official health websites.
You can think of it as two different accuracy layers:
| Setting | Who/What Is Tested | Key Result |
|---|---|---|
| Controlled model evaluation | LLM alone, full case description | 94.9% correct condition ID; 56.3% correct disposition |
| Real-world style user study | Members of the public using AI for 10 scenarios | 34.5% correct condition; 44.2% correct action |
Source: Oxford Internet Institute / Nature Medicine user trial.
The models can score near-perfectly on structured exam-style questions, but that performance does not automatically transfer to ordinary people querying an AI doctor about vague symptoms at home.
AI Triage Accuracy: Can It Tell You When to Go to the ER?
Triage accuracy matters more than perfect diagnosis for many users. The key question is simple: does this tool tell you to seek urgent care when you need it, and to stay home when you do not?
The Oxford study shows that, even in controlled testing, LLMs only selected the correct level of care in 56.3% of scenarios. With real users, appropriate action dropped to 44.2%, which again was similar to traditional internet search.
A broader systematic review of symptom checkers found that triage accuracy across tools ranged from 49% to 90%, depending on the app and condition set. In some studies, emergency cases were triaged more accurately than non-urgent problems; in others, emergencies were under-triaged, meaning people were incorrectly advised to wait or seek routine care.
A separate benchmarking study concluded that, on average, symptom checkers had no greater overall triage accuracy than lay users making their own decisions.
Taken together, current AI systems are not consistently reliable in answering the question users care about most: “Do I need urgent help?”
AI Medical Meta-Analysis Results (2024–2026)
Several meta-analyses have tried to summarize how well these ‘AI doctors’ perform across many different medical tasks and studies.
A 2024 systematic review and meta-analysis of ChatGPT in healthcare looked at 60 papers and pooled data from 17 studies. It found:
- Overall integrated accuracy: 56% (95% CI 51–60%) across diverse medical questions
A 2025 Nature Digital Medicine meta-analysis of 83 studies on generative AI for diagnostic tasks reported:
- Overall diagnostic accuracy: 52.1%
- No significant performance difference between AI systems and physicians once averaged across tasks
For symptom assessment apps, a 2025 review of self-triage tools found:
- Self-triage accuracy ranged from 11.5% for the weakest systems to 90.0% for the best-performing ones, depending on the app and scenario
A simple summary of these meta-findings:
| Tool type / analysis (2024–2026) | Metric | Result |
|---|---|---|
| ChatGPT in medical use (17 studies) | Integrated medical accuracy | 56% (95% CI 51–60%) |
| Generative AI for diagnosis (83 studies) | Diagnostic accuracy | 52.1% overall; no clear advantage over physicians |
| Symptom assessment apps (multiple studies) | Self-triage accuracy | 11.5–90.0%, depending on system and study |
These numbers show that AI tools have clear potential but do not yet meet the reliability that patients expect from a primary source of medical advice.
AI Symptom Checkers vs Real Doctors: Who’s More Accurate?
Digital symptom checkers appeared years before ChatGPT-style models and are now widely embedded in health websites and apps. Their performance has been tested directly against general practitioners using standardized clinical vignettes.
Diagnostic Accuracy: GP vs Symptom Checker Apps
A BMJ Open study led by Gilbert and colleagues compared eight popular symptom assessment apps to general practitioners (GPs) on 200 primary-care vignettes. It measured “top-3 suggestion accuracy”: how often the correct diagnosis appeared among the first three suggestions (a minimal sketch of this metric follows the results below).
Results:
| System / Person | Top-3 Diagnosis Accuracy |
|---|---|
| GPs (average) | 82.1% |
| Ada | 70.5% |
| Buoy | 43.0% |
| K Health | 36.0% |
| Mediktor | 36.0% |
| Babylon | 32.0% |
| WebMD | 35.5% |
| Symptomate | 27.5% |
| Your.MD | 23.5% |
No app outperformed doctors. The best system in that study (Ada) still trailed GPs by more than 10 percentage points, and several widely used tools performed markedly worse.
The same study also noted that some apps declined to give any diagnosis for a substantial share of vignettes, which further complicates real-world use.
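To make the headline numbers concrete, here is a minimal sketch of how “top-3 suggestion accuracy” can be computed: a vignette counts as a hit if the reference diagnosis appears anywhere in the tool’s first three suggestions. The cases below are made-up placeholders, not data from the study.

```python
# Minimal sketch of the "top-3 suggestion accuracy" metric described above.
def top3_accuracy(cases):
    hits = 0
    for correct_diagnosis, suggestions in cases:
        hits += correct_diagnosis in suggestions[:3]  # hit if in the first three
    return hits / len(cases)

cases = [
    # (reference diagnosis from the vignette, ranked suggestions from the app)
    ("migraine", ["tension headache", "migraine", "sinusitis"]),                    # hit
    ("appendicitis", ["gastroenteritis", "constipation", "IBS", "appendicitis"]),   # miss (4th place)
    ("pneumonia", ["pneumonia", "bronchitis", "COVID-19"]),                         # hit
]

print(f"Top-3 accuracy: {top3_accuracy(cases):.1%}")  # 66.7% for this toy set
```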
Triage Accuracy: Can ‘AI Doctors’ Decide the Right Level of Care?
The same BMJ Open work and later systematic reviews also assessed triage behaviour, meaning whether the tool recommended emergency care, urgent GP review, or self-care appropriately.
In Gilbert’s comparison:
- GPs provided safe urgency advice in 97.0% of cases (±2.5%)
- Only three apps (Ada, Babylon, Symptomate) reached safety levels within one standard deviation of the GPs, with safety metrics between 95.1% and 97.8% for the vignettes where they gave advice
- Another app (Your.MD) reached 92.6% safety, within two standard deviations of GPs
A broader systematic review found that overall triage accuracy for symptom checkers ranged from 49% to 90%, depending on the tool and clinical area, and highlighted large variation even when apps received identical input data.
Another study in the Journal of Medical Internet Research concluded that an “average symptom checker” had no greater triage accuracy than an average user working without the app.
For search terms like “AI symptom checker vs doctor” or “Are symptom checkers accurate?”, the evidence is consistent: symptom checkers can be helpful in some contexts, but their diagnostic accuracy lags behind physicians, and their triage decisions vary widely between tools.
Is ChatGPT Safe for Medical Advice?
ChatGPT’s Overall Accuracy in Medical Studies
The 2024 meta-analysis by Wei and colleagues evaluated ChatGPT’s performance across a range of medical tasks, including answering clinical questions, explaining conditions, and solving exam-style problems.
Key finding:
- Integrated accuracy: 56% (95% CI 51–60%) across 17 pooled studies
The authors stress that the underlying studies used different question sources, evaluation methods, and ChatGPT versions, which adds heterogeneity on top of this mid-range accuracy number.
In parallel, a meta-analysis of generative AI for diagnosis across 83 studies found very similar results: an overall accuracy of 52.1%, with no statistically significant difference between AI models and physicians once all tasks were pooled.
These results indicate that ChatGPT-class tools can often produce correct information, but they are not at the reliability level expected of standalone clinical decision-makers.
Why Real-World Use Is Much Less Reliable
The gap between exam scores and real-world safety comes from how people use AI.
In the Oxford Nature Medicine trial, LLMs themselves showed near-perfect diagnostic accuracy on the scenarios. Only when participants used the models as part of their own decision-making did performance drop to 34.5% correct diagnoses and 44.2% correct actions, aligning with traditional search approaches.
The study identified several reasons:
- Participants often provided limited or incomplete descriptions of symptoms.
- Some users misinterpreted the AI’s phrasing or level of certainty.
- Occasionally, models produced mixed or misleading advice within the same answer.
A commentary on the study notes that, just as medicines require rigorous trials in real-world populations before routine use, LLM-based systems need careful evaluation as they are actually used by patients, not only on curated test sets.
In other words, ChatGPT and similar systems can be a useful information layer, but they are not safe as the sole basis for deciding what to do about symptoms.
Can AI Be Fooled by Medical Misinformation?
Study: AI Accepts Fake Medical Claims 47% of the Time
A 2026 study in The Lancet Digital Health examined how easily AI models accept and repeat false medical statements. Researchers at Mount Sinai tested 20 LLMs, feeding them:
- Real hospital discharge summaries with a single fabricated recommendation inserted
- Common health myths taken from Reddit
- Physician-written clinical scenarios containing false claims
The results showed a large gap based on how “official” the text looked:
- When misinformation was embedded in hospital-style discharge notes, models accepted the false claims nearly 47% of the time.
- When the same myths came from Reddit, acceptance dropped to about 9%.
Some models were more robust than others, but susceptibility to well-formatted misinformation reached as high as 63.6% in the weakest systems.
Why Official-Looking Misinformation Is Especially Dangerous
Modern LLMs are trained to treat well-structured, authoritative-sounding text as more trustworthy. They do not have direct access to ground truth; they infer credibility from style and context.
In practice, that means:
- A false recommendation inside what looks like a real discharge summary is far more likely to be repeated.
- A casually written myth in an internet forum is more likely to be flagged or corrected.
As hospitals and software vendors start using AI to draft notes, summaries, and patient instructions, any error that enters those documents can be reinforced when the ‘AI doctors’ later treat them as a reliable source. The Lancet study shows that formatting alone can tip many models toward accepting or rejecting a medical claim.
For a patient, there is no visible warning when an AI answer is based on a fabricated but professional-looking statement.
Real-World Failures of AI in Healthcare
IBM Watson for Oncology: A Case Study in Overpromising AI
IBM’s “Watson for Oncology” was marketed as a flagship example of AI in cancer care. Internal IBM documents obtained by STAT News told a different story.
Those documents showed that Watson for Oncology produced “unsafe and incorrect” cancer treatment recommendations, including advice that conflicted with clinical guidelines, according to IBM clinicians and hospital partners who tested the system.
Hospitals that piloted the technology reduced or discontinued its use. Commentaries in oncology journals and trade publications now frequently cite Watson for Oncology as an example of how high expectations for AI can collide with the complexities of real-world cancer treatment.
AI-Guided Surgery and Reported Injuries
AI is also being embedded into surgical navigation systems. One widely discussed case involves the TruDi Navigation System, initially marketed by a Johnson & Johnson subsidiary and later acquired by Integra LifeSciences.
The device is used in sinus surgery to show surgeons where their instruments are inside the skull. After AI-based software updates were introduced, US FDA databases recorded:
- At least 100 adverse events related to the device between late 2021 and 2025
- At least 10 patients with serious injuries, including:
- Cerebrospinal fluid leaks
- Punctures of the skull base
- Strokes following accidental arterial injury
Regulators have not conclusively attributed each case to AI errors rather than device misuse, but investigation reports repeatedly describe scenarios where the navigation system mis-located instruments, and surgeons followed the guidance. The manufacturer has disputed some allegations but has also updated software and provided further training.
These cases illustrate the stakes: small positioning errors in an AI-enhanced tool can translate directly into physical harm.
AI Doctors Removed From App Stores
Consumer-facing “AI doctor” apps have also triggered interventions.
Reuters documented how an app called “Eureka Health: AI Doctor” marketed itself with language encouraging users to “become [their] own doctor.” After Reuters asked questions, Apple removed the app from its store under guidelines that restrict unapproved medical diagnostic claims.
The same investigation tested dermatology apps such as “AI Dermatologist”, which claimed over 97% accuracy for skin-cancer risk assessment. Users, however, had posted many critical reviews and screenshots showing misidentified or trivialized lesions. Independent analyses and professional bodies have separately warned that direct-to-consumer skin cancer apps are often unreliable and poorly regulated, echoing concerns from the British Association of Dermatologists and other groups.
These ‘AI doctors’ typically include disclaimers stating they are “for information only,” but users may still interpret outputs as diagnostic opinions.
When AI Doctors Cause Anxiety Instead of Clarity
Reuters also described an 18-year-old cancer patient in Turkey who used ChatGPT to ask about a persistent cough. The chatbot listed relapse and lung metastasis among possible explanations. The patient then feared cancer had returned until clinicians later attributed the cough to smoking.
In this case, AI did not give a formal diagnosis. It simply included a serious disease in a list of possibilities. For a person with a cancer history, that phrasing was enough to generate intense anxiety.
This illustrates another dimension of harm: AI can increase distress even when it does not make a specific error, simply by presenting worst-case scenarios without context or follow-up care.
The Hidden Risks of AI Medicine / AI Doctors Most People Don’t See
No Physical Exam, No Clinical Intuition
AI medical tools work from text and, in some cases, images. They cannot perform a physical examination, listen to your breathing, palpate your abdomen, or observe your gait and overall appearance.
Clinicians routinely adjust their judgement based on non-verbal cues: whether a patient looks acutely ill, how easily they speak, or how they move while describing symptoms. An AI doctor, by design, has access only to the words you type or the images you upload. If you omit a detail, mis-describe a symptom, or focus on the wrong complaint, the system has no secondary channel to compensate.
This limitation is structural and does not disappear with more training data.
Bias in Medical AI Training Data
AI inherits the properties of the data used to train it. In healthcare, many datasets already contain well-documented biases.
Pulse oximeters and skin tone
A 2020 New England Journal of Medicine correspondence showed that pulse oximeters (devices used to measure blood oxygen saturation) were more likely to overestimate oxygen levels in Black patients than in white patients, leading to “occult hypoxemia” (low oxygen not flagged by the monitor) more often in Black patients.
Follow-up analyses and regulatory discussions have repeated the concern that sensor performance and calibration did not account adequately for darker skin tones.
Race-adjusted kidney function (eGFR)
For years, widely used equations for estimated glomerular filtration rate (eGFR) included a race-based adjustment that tended to give Black patients higher estimated kidney function than non-Black patients with the same lab results. Research and legal challenges have linked this to delayed recognition of chronic kidney disease and later eligibility for transplant waiting lists.
In response, new race-free CKD-EPI equations were introduced in 2021, and US transplant authorities required transplant centres to stop using race-adjusted eGFR in 2022.
These examples are not themselves AI models, but they show how clinical tools can systematically disadvantage some groups if design and validation overlook diversity. When AI models – including apps marketed as ‘AI doctors’ – are trained on records generated under such conditions, they can carry forward the same inequities.
Why AI Doctors Sound More Confident Than They Should
Large language models are optimised to produce fluent, coherent text, not to express calibrated uncertainty. Unless explicitly constrained, they tend to:
- Provide complete-sounding answers even when underlying evidence is weak or mixed
- Fill gaps with plausible details rather than leaving blanks
- Use confident phrasing that resembles expert opinion
In consumer interfaces, these properties can make AI doctors’ responses feel more authoritative than they are. A user reading “This is likely X” or “This is probably Y” in a well-structured paragraph may not realise that, statistically, model-level accuracy on similar questions sits around 50–60% in pooled studies.
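One way to see the gap between confident wording and actual reliability is a simple calibration check: log how confident the system claimed to be alongside whether its answer turned out to be correct, then compare the two. The sketch below uses made-up placeholder data purely to illustrate the idea; it is not output from any of the studies cited here.

```python
# Minimal calibration check: stated confidence vs observed accuracy.
from collections import defaultdict

# (stated confidence, was the answer actually correct?) - placeholder data
logged = [
    (0.9, True), (0.9, False), (0.9, True), (0.9, False), (0.9, True),
    (0.7, True), (0.7, False), (0.7, False), (0.7, True), (0.7, False),
]

by_confidence = defaultdict(list)
for stated, correct in logged:
    by_confidence[stated].append(correct)

# A well-calibrated system is right ~90% of the time when it claims 90% confidence.
for stated in sorted(by_confidence, reverse=True):
    outcomes = by_confidence[stated]
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {stated:.0%} -> observed {observed:.0%} over {len(outcomes)} answers")
```

In this toy log the system answers with 90% confidence but is right only 60% of the time, which is exactly the kind of overconfidence the paragraph above describes.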
When Should You NOT Use AI Doctors?
Emergency Symptoms That Require Immediate Medical Care
For certain symptoms, the safest approach is straightforward: bypass all the AI doctors and contact emergency services or your local urgent care line directly. These include:
- New chest pain, chest pressure, or tightness, especially if it radiates to arm, jaw, or back
- Sudden difficulty breathing or shortness of breath at rest
- Signs of stroke (face drooping, arm weakness, speech difficulties, sudden confusion)
- Sudden, severe headache unlike previous headaches
- Heavy or uncontrolled bleeding
- High fever accompanied by confusion, stiff neck, or difficulty staying awake
- Severe abdominal pain with vomiting or a rigid abdomen
These red-flag symptoms are well established in emergency medicine and are not suitable for remote triage by unregulated AI tools.
Situations Where AI Doctors’ Advice Is Especially Risky
AI medical advice via the so-called AI doctors is also particularly risky when:
- You have a known serious condition (e.g., cancer, heart disease, organ transplant) and are worried about new symptoms
- You are pregnant and have pain, bleeding, or reduced fetal movements
- You are considering starting, stopping, or combining prescription medications
- You are dealing with acute mental health crises, including suicidal thoughts or psychosis
- You need clearance for surgery, major travel, or a high-risk activity
In these scenarios, decision-making depends on full access to your medical history, medication list, allergies, and recent test results, plus the ability to examine you in person. Current AI systems do not have that level of context or responsibility.
How to Use AI Doctors for Health Questions Safely
AI doctors can still be useful for your health if you use them in the right way and keep them in the right role.
Use AI Doctors as Research Assistants, Not as Real Doctors
Appropriate uses include:
- Converting medical jargon from lab reports or imaging results into plain language
- Summarising general information from reputable guidelines or hospital websites
- Drafting a list of questions to bring to your appointment
- Organising notes after a consultation so you remember what was discussed
In all of these cases, the AI doctors help you understand information or prepare for conversations, but they do not decide what happens next. The decision remains with you and your clinicians.
Always Cross-Check AI Medical Advice
Whenever AI doctors, chatbots, or symptom checkers suggest a likely diagnosis or treatment:
- Look up the same condition on trusted health sites (national health services, major hospitals, or recognised international bodies).
- Check at least two independent sources to see whether the advice matches.
- Bring the information to your doctor and ask directly whether it applies to your specific case.
The Oxford trial data show that, even with access to advanced AI, people’s decisions about conditions and actions did not improve compared with traditional search.
Treat the output of these AI doctors as a starting hypothesis, but definitely not as a final answer. To get a final answer you must see a health specialist.
How to Spot Red-Flag AI Doctor Apps
Be cautious with AI doctors or health apps that:
- Claim or imply “doctor-level” or “better than doctor” accuracy, or market themselves simply as ‘AI doctors’
- Provide definitive diagnoses rather than describing possibilities and red-flag symptoms
- Lack clear information about who developed the tool, which clinicians are involved, and how it was validated
- Do not link to any peer-reviewed studies or regulatory approvals for their algorithms
- Focus on high-risk domains (oncology, cardiology, psychiatry) without transparent clinical evaluation
The Reuters investigation into AI medical apps found examples of tools that claimed over 97% accuracy yet delivered clearly incorrect dermatology assessments to users, while others invited people to “become [their] own doctor” despite disclaimers stating they were “not for diagnosis.”
Marketing language is not evidence of safety. Published validation and clear regulatory status are stronger indicators.
How to Protect Your Health Data When Using AI Doctors
Before you share detailed health information, photos, or documents with any AI tool or AI doctors:
- Check whether the provider states that it handles health data under relevant laws (e.g., HIPAA in the US, GDPR special-category rules in the EU).
- Read how data are stored, whether they are encrypted, and whether they are used for model training.
- Prefer tools that allow you to opt out of data reuse for model improvement.
- Avoid posting identifiable health information into public or experimental systems.
Even when medical advice is correct, loss of privacy or misuse of sensitive data is a separate form of harm.
Are AI Doctors Safe to Trust?
Current evidence from controlled tests, user studies, meta-analyses, and real-world incidents points in the same direction when it comes to the use of the so-called AI doctors:
- LLM chatbots can reach around 95% diagnostic accuracy on structured vignettes, but when ordinary users rely on them, correct conditions drop to 34.5% and correct actions to 44.2%, similar to standard web search.
- ChatGPT-class systems show an integrated medical accuracy of about 56% in pooled studies, while generative AI diagnostic tools as a group sit near 52.1% – far from clinical reliability standards.
- Symptom checkers lag behind GPs in diagnostic accuracy and show wide variation in triage, with reported triage accuracy ranging from 49% to 90% across tools.
- Misinformation studies show that AI models can accept and repeat false medical claims in nearly half of cases when those claims appear in professional-looking documents.
- Real-world deployments, from Watson for Oncology to AI-guided surgical navigation and consumer apps, have already produced unsafe recommendations, documented injuries, regulatory concern, and app-store removals.
AI is clearly useful in medicine – as a tool that helps clinicians search literature, summarise records, and explain complex information. Used that way, inside a supervised clinical workflow, it can save time and support decision-making.
For patients, the safest position today is:
- Use AI as a research assistant to understand terms, options, and questions.
- Do not treat any AI system, app, or chatbot, or even a dedicated ‘AI doctor’, as a substitute for clinical examination, diagnosis, or triage.
And keep in mind that based on current data, AI doctors are not yet reliable enough to carry the responsibility that the word “doctor” implies.
