The free version of ChatGPT, a popular artificial-intelligence chatbot, may provide incorrect or incomplete answers to drug-related questions, according to a new study. Pharmacists at Long Island University posed 39 questions to the free version of ChatGPT, and only 10 of the responses were deemed satisfactory. The remaining answers either did not address the question or were inaccurate, incomplete, or both.

The study raises concerns about potential risks to patients who rely on ChatGPT for medical information. Lead author Sara Grossman advises patients and healthcare professionals to verify any responses from the chatbot against trusted sources, such as doctors or reputable medication information websites. The study joins a growing number of investigations into ChatGPT's accuracy and consumer protections.

The free version of ChatGPT draws on data sets extending only through September 2021, so it may lack the most up-to-date medical information. The researchers focused on the free version to replicate what the majority of users have access to; while the paid version of ChatGPT might yield different results, their aim was to evaluate the most commonly used and accessible one.

The study offers only one snapshot of ChatGPT's performance, and a similar study conducted today could produce better results. Even so, users should exercise caution when relying on ChatGPT for drug-related information and seek confirmation from trusted sources.
Study Suggests ChatGPT’s Inaccurate Drug Responses Endanger Patients