TECHNOLOGY

Oxford Study Finds AI Chatbots No Better Than Internet Searches for Medical Advice

AI chatbots such as GPT-4o struggled to give useful medical advice, correctly identifying conditions in less than 34.5% of cases and highlighting the gap between AI's potential and its real-world application, an Oxford study says.

A recent study published in Nature Medicine and led by the Oxford Internet Institute at the University of Oxford has raised significant concerns about the effectiveness of AI chatbots in providing medical advice. The study, involving 1,298 participants in Britain, found that AI tools such as OpenAI's GPT-4o, Meta's Llama 3, and Cohere's Command R+ did not outperform traditional methods such as internet searches or the National Health Service website in helping users identify medical conditions and decide on an appropriate course of action. Specifically, AI users correctly identified relevant conditions in less than 34.5% of cases and chose the right course of action in under 44.2% of cases, figures comparable to those for participants using traditional resources (Tribune Latest, February 10, 2026). Adam Mahdi, a co-author of the study, emphasized the "huge gap" between AI's theoretical potential and its practical application, noting that while the knowledge exists within these systems, it often fails to translate effectively in real-world human interactions (Storyboard18, February 10, 2026).