Man Hospitalized with ‘Bromism’ After Following ChatGPT’s Dangerous Dietary Advice

A 60-year-old man seeking a healthier lifestyle ended up in the hospital with ‘bromism’ after replacing table salt with a toxic industrial chemical—on the recommendation of ChatGPT. The incident, detailed in the Annals of Internal Medicine, has ignited fresh debate over the safety of using artificial intelligence for medical advice.

The man, aiming to eliminate sodium chloride (common table salt) from his diet, turned to the AI chatbot for a substitute. According to the case study, ChatGPT suggested sodium bromide—a compound once used as a sedative but now largely restricted to industrial cleaning, manufacturing, and agriculture due to its toxicity.

Over three months, the man unknowingly consumed the hazardous chemical. By the time he sought medical help, he was experiencing severe symptoms: fatigue, insomnia, poor coordination, skin issues, excessive thirst, paranoia, and hallucinations. Doctors diagnosed him with bromism, a rare but dangerous condition caused by long-term bromide exposure.

Medical staff placed the man on intravenous fluids, electrolytes, and antipsychotic medication. After three weeks of monitoring, he was discharged.

This bromism case underscores a troubling reality: AI systems like ChatGPT are not medical professionals. Experts stress that large language models (LLMs) do not fact-check in real time and may surface outdated or inappropriate recommendations. “ChatGPT’s bromide blunder shows why context is king in health advice,” said Dr. Harvey Castro, an emergency medicine physician and AI researcher.

AI models generate responses by predicting the most statistically likely word sequences based on their training data. If that data includes outdated sources or chemistry references in which one compound can substitute for another, harmful suggestions can surface in casual health conversations.

Adding to the risk is a “regulation gap” in global AI oversight. While substances like sodium bromide are banned in food contexts, there is no direct regulation preventing AI from recommending them.

OpenAI, the developer of ChatGPT, reiterated in a statement that the system is “not intended for use in the treatment of any health condition” and encouraged users to seek professional guidance.

Experts call for integrating AI health advice with verified medical databases, adding automated safety flags, and requiring human oversight in high-risk contexts like healthcare. Without such safeguards, rare but dangerous incidents could continue.

The case serves as a stark reminder: AI can be a powerful tool for research, but when it comes to your health, nothing replaces a qualified medical professional’s judgment.

