Updated On: 08 August, 2025 09:16 AM IST | New Delhi | IANS

Amid the increasing presence of artificial intelligence (AI) tools in healthcare, a new study has warned that AI chatbots are highly vulnerable to repeating and elaborating on false medical information.
Researchers at the Icahn School of Medicine at Mount Sinai, US, said the findings reveal a critical need for stronger safeguards before such tools can be trusted in healthcare.
The team also demonstrated that a simple built-in warning prompt can meaningfully reduce that risk, offering a practical path forward as the technology rapidly evolves.