Artificial Intelligence in Healthcare: Opportunities and Dangers of Over-Reliance
Artificial intelligence can enhance healthcare by improving diagnostics and efficiency, yet a real case of bromism linked to AI-influenced advice highlights the dangers of uncritical reliance.

AI systems are increasingly used to assist with healthcare decisions. They provide rapid analysis of complex datasets, generate patient-friendly explanations, and offer guidance on diet, medication, and lifestyle. While these tools improve accessibility and efficiency, they are not infallible. Without the ability to interpret context, assess risks, or ensure accuracy, AI may produce misleading or harmful advice. A striking example is the recent report of a patient who developed bromism after following AI-generated recommendations on dietary salt substitution, covered in the news media (NBC News, 2025) and detailed in a case study published in Annals of Internal Medicine: Clinical Cases (Eichenberger et al., 2025).
The Benefits of AI in Healthcare
Enhanced Diagnostics
AI-based algorithms can detect subtle radiological changes or laboratory abnormalities that may escape human observation. By integrating imaging, laboratory data, and patient records, these systems improve diagnostic accuracy and speed.
Predictive and Preventive Care
Through large-scale data analysis, AI can predict disease risks, hospital readmissions, or complications in surgical recovery. This allows for timely interventions and preventive strategies.
Improved Accessibility
Virtual assistants and AI chatbots improve access to health information, reduce the workload of clinicians, and empower patients with quick responses to basic questions.
Research and Development
AI accelerates drug discovery and clinical research by identifying molecular targets, analyzing trial data, and simulating outcomes.
Case Study: Bromism Influenced by ChatGPT
A 60-year-old man with no prior medical or psychiatric history presented with paranoia, hallucinations, and metabolic disturbances. Laboratory results showed hyperchloremia, a negative anion gap, and other electrolyte abnormalities. During hospitalization, he disclosed that he had been attempting to eliminate chloride from his diet after reading about the health risks of sodium chloride. Seeking alternatives, he consulted ChatGPT, which suggested bromide as a possible substitute.
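The negative anion gap was itself a telling clue. The anion gap is calculated as sodium minus the sum of chloride and bicarbonate, and it is normally positive (roughly 8–12 mEq/L). Many laboratory analyzers misread bromide as chloride, so bromism produces a spuriously high chloride result (pseudohyperchloremia) that can drive the calculated gap negative. As a rough illustration with hypothetical values, not the patient's actual results:

Anion gap = Na⁺ − (Cl⁻ + HCO₃⁻) = 140 − (150 + 24) = −34 mEq/L

A negative value like this points to assay interference rather than true electrolyte physiology.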
For three months, he replaced dietary sodium chloride with sodium bromide purchased online. This led to bromide toxicity (bromism), manifesting as psychiatric symptoms, dermatologic changes, ataxia, and electrolyte imbalance. His bromide level was markedly elevated at 1700 mg/L (reference range 0.9–7.3 mg/L), more than 200 times the upper reference limit. After intravenous fluids, electrolyte correction, and discontinuation of bromide intake, his symptoms resolved.
This case underscores how AI can inadvertently contribute to life-threatening outcomes when its advice is misinterpreted or followed without professional supervision.
This case has been described in detail in a peer-reviewed report, "A Case of Bromism Influenced by Use of Artificial Intelligence" (Eichenberger et al., 2025), and was also covered in mainstream media (NBC News, 2025).
Risks of Over-Reliance on AI
- Data Quality and Bias: AI relies on training data. Incomplete or biased data leads to inaccurate recommendations.
- Lack of Context: AI cannot fully interpret a patient’s medical history, comorbidities, or psychosocial background. The bromism case illustrates this gap: the chatbot suggested bromide without weighing the nutritional and toxicological implications of ingesting it in place of chloride.
- Misinformation and “Hallucinations”: AI may generate scientifically plausible but incorrect information. The suggestion of bromide as a dietary substitute is an example of misleading advice framed as neutral information.
- Absence of Accountability: Unlike clinicians, AI has no responsibility for errors. It cannot be questioned or held liable for the harm caused by its output.
Why Human Expertise is Essential
Medicine requires more than factual recall. It involves clinical reasoning, empathy, ethical decision-making, and the ability to adapt advice to an individual’s circumstances. Human experts would recognize the dangers of bromide ingestion and never suggest its use as a dietary salt substitute. AI lacks this protective judgment.
Recommendations for Safe Use of AI in Healthcare
- Use AI as a Supportive Tool: Patients and clinicians should treat AI outputs as supplementary, never definitive.
- Improve Transparency: Developers should ensure AI systems provide disclaimers, highlight uncertainty, and discourage unsupervised use.
- Strengthen Regulation: Oversight is required for AI health tools to prevent dissemination of harmful medical misinformation.
- Encourage Patient Education: Patients must understand that AI cannot replace professional medical consultation.
Conclusion
Artificial intelligence has great potential to improve healthcare efficiency, accuracy, and accessibility. Yet real-world cases such as bromism resulting from AI-influenced advice illustrate the dangers of unsupervised reliance on these tools. No computer system can replace the human mind, with its unique combination of knowledge, experience, empathy, and accountability. The safest model for healthcare is one where AI augments human expertise while final responsibility rests with trained medical professionals.
References
- Eichenberger, Audrey, et al. “A Case of Bromism Influenced by Use of Artificial Intelligence.” Annals of Internal Medicine: Clinical Cases, vol. 4, no. 8, 5 August 2025, doi: 10.7326/aimcc.2024.1260. <https://www.acpjournals.org/doi/full/10.7326/aimcc.2024.1260>.
- Madani, Doha. “Man who asked ChatGPT about cutting out salt from his diet was hospitalized with hallucinations.” NBC News, 14 August 2025. <https://www.nbcnews.com/tech/tech-news/man-asked-chatgpt-cutting-salt-diet-was-hospitalized-hallucinations-rcna225055>.