ChatGPT, the AI-powered chatbot, has come under scrutiny for its subpar performance in diagnosing medical conditions, according to a recent study.
Researchers tested the chatbot’s ability to assess 150 case studies from the medical website Medscape and found that it provided an accurate diagnosis only 49% of the time, a result that underscores the limits of relying on AI for complex medical cases that demand human expertise.
ChatGPT generates responses based on patterns it has learned from its training data, which is drawn from a wide variety of texts. However, the study revealed that the chatbot often ‘hallucinates,’ giving inaccurate information or fabricating responses entirely.
Despite these shortcomings, the researchers believe AI chatbots could still be valuable tools for educating patients and trainee doctors, provided they are supervised and their output is fact-checked.
The study emphasizes the importance of understanding AI’s limitations in medicine and of keeping human doctors central to the diagnostic process. While AI has the potential to enhance clinical decision-making and patient engagement, it must be used with caution and in conjunction with human expertise.