February 27, 2026
‘Unbelievably dangerous’: experts sound alarm after ChatGPT Health fails to recognise medical emergencies
Study finds ChatGPT Health did not recommend a hospital visit when medically necessary in more than half of cases

TL;DR
- A study evaluating ChatGPT Health found it missed the need for urgent medical care in over 50% of tested scenarios.
- The AI platform also failed to consistently detect suicidal ideation, particularly when normal lab results were provided.
- Experts warn that these failures could cause avoidable harm, delayed treatment, and even death for users who rely on the AI for medical advice.
- OpenAI responded that the study did not reflect typical user interactions and that the model is being continually refined.