In 2023, artificial intelligence (AI) became a prominent topic with the emergence of large language models such as OpenAI's ChatGPT. These models have given the public access to AI assistants that can help with tasks ranging from suggesting dinner recipes to explaining intricate theories. However, a recent study raises concerns about using ChatGPT for medical advice, emphasizing the need for caution.
The study, conducted by Sara Grossman, PharmD, Associate Professor of Pharmacy Practice at Long Island University, and her team, set out to assess how well ChatGPT handles medication-related questions. The researchers took real medical questions posed to Long Island University's College of Pharmacy drug information service over a 16-month period and challenged ChatGPT with the same queries.
The study's conclusion cautions healthcare professionals and patients against relying on ChatGPT as an authoritative source of medication-related information. Grossman advises individuals to verify any information ChatGPT provides against trusted sources.
To evaluate ChatGPT's performance, pharmacists involved in the study researched and answered 45 queries, each of which was then reviewed by a second investigator. The same questions were then posed to ChatGPT, excluding six for which no published literature was available, leaving 39 questions. Of those 39, only 10 of ChatGPT's responses were deemed satisfactory.
One instance highlighted the potential dangers of using ChatGPT without additional verification. When researchers asked ChatGPT about a possible interaction between the COVID-19 treatment Paxlovid and the blood-pressure-lowering medication verapamil, ChatGPT incorrectly stated that no interactions had been reported for that combination. In reality, taking the two together can lower blood pressure excessively, exposing patients to preventable side effects.
Among the 29 questions for which ChatGPT's responses were judged inaccurate or incomplete, the researchers found that 11 responses did not directly address the question, 10 were inaccurate, and 12 were incomplete; because these counts sum to more than 29, some responses evidently fell into more than one category.
As this study demonstrates, ChatGPT's performance on medication questions is not yet up to par. Inconsistent and inaccurate responses could have serious consequences for patients who rely solely on its information, so any medication-related information obtained from ChatGPT should be cross-checked against trusted sources.
The findings of this study shed light on the limitations of AI language models, such as ChatGPT, in the medical domain. While these tools can be beneficial for certain tasks, they should not be treated as infallible sources of medical advice. As technology advances, it is crucial to prioritize accuracy and reliability when utilizing AI in healthcare settings.
In conclusion, clinicians and individuals seeking medical advice should exercise caution and consult trusted sources alongside AI language models like ChatGPT. The study by Sara Grossman and her team highlights the need for further development and improvement of AI in the medical field to ensure patient safety and the accurate dissemination of information.
Disclaimer: The information in this article is not intended or implied to be a substitute for professional medical advice, diagnosis, or treatment. All content, including text, graphics, images, and information, contained in this article is for general informational purposes only.