HEALTH

AI Tools Found Likely to Provide Incorrect Medical Advice

Studies reveal AI tools like ChatGPT often accept false medical claims, posing risks in healthcare. AI struggles to provide accurate advice, highlighting the need for improved safeguards.

A recent series of studies and reports has highlighted significant concerns about the reliability of artificial intelligence (AI) tools in providing medical advice. According to a study published in The Lancet Digital Health and reported by euronews, large language models (LLMs) such as OpenAI's ChatGPT were found to accept false medical claims about 32% of the time. The study, conducted by researchers at Mount Sinai Health System, tested 20 different LLMs and found that misinformation presented in realistic medical language was often accepted as true. Dr. Eyal Klang, a co-author of the study, noted, "Current AI systems can treat confident medical language as true by default, even when it's clearly wrong." The study also revealed that smaller or less advanced models believed false claims more than 60% of the time, whereas more robust systems like ChatGPT-4o did so only about 10% of the time.