Man ends up in hospital with psychosis after taking medical advice from ChatGPT

2025-08-12 20:34:55 · Kosova&Bota · Written by the VOX editorial team

A man accidentally poisoned himself and spent three weeks in hospital after turning to ChatGPT for medical advice.

An American medical journal reported that the 60-year-old developed a rare condition after removing table salt from his diet and replacing it with sodium bromide. The man "decided to conduct a personal experiment" after consulting ChatGPT on how to reduce his salt intake, according to an article in the Annals of Internal Medicine.

The experiment led to bromism, a condition that can cause psychosis, hallucinations, anxiety, nausea, and skin problems such as acne.

The condition was common in the 19th and early 20th centuries, when bromide tablets were regularly prescribed as sedatives, for headaches, and to control epilepsy. The tablets were believed to contribute to up to 8% of psychiatric admissions at the time.

Today the condition is virtually unknown, as bromide is no longer prescribed; sodium bromide is now used mainly as a pool-cleaning chemical.

According to the case report, the man arrived at an emergency department "expressing concern that his neighbor was poisoning him."

He later tried to leave the hospital before being placed under psychiatric care and treated with antipsychotic medication. The man, who had no previous history of mental health problems, spent three weeks in hospital.

Doctors later discovered that the patient had consulted ChatGPT for advice on removing salt from his diet, although they were unable to access his original conversation history.

When they tested ChatGPT themselves to see whether it would give a similar answer, the chatbot also suggested replacing salt with sodium bromide and "did not issue a specific health warning."

The authors wrote that "the case highlights how the use of artificial intelligence (AI) can potentially contribute to the development of preventable adverse health outcomes."

AI chatbots have long suffered from a problem known as hallucination, meaning they make up facts. They can also give inaccurate answers to health questions, sometimes because they draw on unreliable information gathered from the internet.

Last year, a Google chatbot suggested that users should "eat rocks" to stay healthy. The advice appeared to be based on satirical posts collected from Reddit and the website The Onion.

OpenAI said last week that a new update to ChatGPT, GPT-5, was able to provide more accurate answers to health questions.

The Silicon Valley company said it had tested the new model on a set of 5,000 health questions designed to simulate common conversations with doctors.

An OpenAI spokesperson said: "You should not rely on the results of our services as a single source of truth or factual information, or as a substitute for professional advice."

