A recent case published in Annals of Internal Medicine has raised concerns about relying on artificial intelligence tools like ChatGPT for medical guidance. According to the report, a 60-year-old man developed a rare condition called bromism—a form of bromide poisoning—after following advice from the AI chatbot on reducing salt intake.
The incident began when the man, concerned about the negative health effects of excessive sodium chloride (table salt), asked ChatGPT for alternatives.
The chatbot reportedly suggested bromide salts as a possible substitute, noting they could be used for other purposes such as cleaning, but failed to warn of their toxicity. Acting on this information, the man used sodium bromide in place of table salt for three months.
Over time, he developed severe psychiatric symptoms, including paranoid delusions that his neighbors were trying to poison him and an irrational fear of drinking water—even when thirsty.
Medical tests later confirmed bromide poisoning, which also caused acne-like skin lesions, extreme thirst, and insomnia.
The authors of the case study, researchers from the University of Washington in Seattle, noted that bromide salts were once used as sedatives in the early 20th century, with historical records linking them to about 10 percent of psychiatric hospital admissions in the U.S. at the time.
While attempting to review the patient’s ChatGPT interactions, the team found they could not access the original conversation logs. However, their own tests showed the AI did suggest bromide as a chloride alternative without providing critical health warnings or clarifying the user’s intent—a stark contrast to how medical professionals would approach such queries.
The researchers cautioned that AI tools like ChatGPT risk spreading scientific misinformation because they lack critical judgment and tend to present plausible but potentially dangerous suggestions as fact.
In response to growing scrutiny, OpenAI, the company behind ChatGPT, announced upgrades to its GPT-5 model last week, claiming improved accuracy in health-related responses and better detection of high-risk questions involving physical or mental health emergencies.
However, the firm reiterated that its chatbot "is not a substitute for professional medical advice" and should not be used for diagnosing or treating health conditions.
The case has reignited debates about the ethical responsibilities of AI developers and the need for clearer disclaimers on platforms offering health-related information.
Medical experts continue to urge the public to consult licensed professionals rather than unverified digital sources for critical health decisions.