Researchers at Long Island University have found that ChatGPT, the popular artificial intelligence language model, struggles to answer medication-related questions accurately. In the study, 39 real-world medication queries were posed to the free version of ChatGPT, and the results were far from impressive: only 10 received accurate responses, while the remaining 29 prompts drew incomplete or inaccurate answers, or no response at all.
More concerning still, when the researchers asked ChatGPT to provide scientific sources for its answers, the model fabricated references and citations in some cases, raising doubts about the reliability and credibility of the information the platform provides.
OpenAI, the organization behind ChatGPT, has made clear that users should not rely on the model's responses as a substitute for professional medical advice or treatment. While AI has the potential to revolutionize many industries, including healthcare, this study highlights the limitations and potential dangers of relying on AI alone for medical advice.
The accuracy and completeness of medical information are crucial, particularly when people are seeking advice on medications and treatments. Inaccurate information can mislead patients and potentially cause real harm.
Dr. Lisa Johnson, a medical expert not involved in the study, expressed concerns about the use of AI in healthcare. "While AI can be a valuable tool, it should never replace the expertise and knowledge of medical professionals," said Dr. Johnson. "Healthcare is a complex field, and the nuances of each patient's condition cannot be fully understood or addressed by AI alone."
The implications of this study extend beyond personal queries. With the increasing popularity of AI-powered chatbots in healthcare settings, such as telemedicine platforms, the reliability and accuracy of these systems become crucial. Medical professionals and organizations need to be cautious about incorporating AI into their practices without robust evidence of its effectiveness.
To improve the accuracy and reliability of AI in the medical field, further research and development are necessary. The current study sheds light on the shortcomings of ChatGPT specifically, but it serves as a reminder to developers and researchers across the board to prioritize the integrity of the information provided by AI systems.
In conclusion, the Long Island University study shows that ChatGPT struggles to answer medication-related questions accurately. With only 10 of 39 queries answered accurately, roughly one in four, medical professionals and everyday users alike should exercise caution when relying on AI for medical advice. The need for human expertise and judgment in healthcare remains paramount, and the limitations of AI models must be acknowledged and addressed.