A recent study presented at the American Society of Health-System Pharmacists Midyear Clinical Meeting in Anaheim, California, has raised concerns about the accuracy and reliability of ChatGPT when it comes to answering drug-related questions.
The study, conducted by Sara Grossman, Pharm.D., of Long Island University in New York, evaluated ChatGPT’s performance using 39 questions that had previously been submitted to the university’s drug information service.
ChatGPT provided a response to 72% of the 39 questions. However, the study found that only 36% of those responses were accurate, complete, and free of irrelevant information; the remaining 64% were inaccurate or incomplete.
The study also examined question complexity: only one-third of complex queries and 23% of noncomplex questions received satisfactory responses from ChatGPT.
In light of these findings, Grossman, the study’s lead author, warned that healthcare professionals and patients should exercise caution when using ChatGPT as a source of medication-related information.
Grossman emphasized the importance of verifying information obtained from ChatGPT using trusted and authoritative sources.
The study highlights the limitations of AI language models like ChatGPT in providing accurate and reliable information in specialized fields such as pharmacology and medicine.
While AI can be a valuable tool for general information and assistance, verifying critical information with experts and trusted sources remains essential, particularly in areas where accuracy is paramount, such as healthcare.