AI’s Knowledge Gap: Caution Urged as ChatGPT Falls Short on Medical Advice

In 2023, artificial intelligence (AI) became a prominent topic with the emergence of large language models such as OpenAI’s ChatGPT. These models give the public access to AI assistants that can help with tasks ranging from suggesting dinner recipes to explaining intricate theories. However, a recent study raises concerns about using ChatGPT for medical advice and emphasizes the need for caution.

The study, conducted by Sara Grossman, PharmD, Associate Professor of Pharmacy Practice at Long Island University, and her team, aimed to assess ChatGPT’s capability in the medical field. Over a 16-month period, the researchers gathered real medication questions received by Long Island University’s College of Pharmacy drug information service and then challenged ChatGPT with the same queries.

The study’s conclusion cautions healthcare professionals and patients against relying on ChatGPT as an authoritative source for medication-related information. Grossman advises individuals to verify any information provided by ChatGPT against trusted sources.

To establish a benchmark, pharmacists involved in the study researched and answered 45 queries, and each answer was reviewed by a second investigator. The same questions, excluding six for which there was a lack of published literature, were then posed to ChatGPT. Of the remaining 39 questions, only 10 of ChatGPT’s responses were deemed satisfactory.

One instance highlighted the potential dangers of using ChatGPT without additional verification. When researchers asked ChatGPT about a possible drug interaction between the COVID-19 treatment Paxlovid and the blood-pressure-lowering medication verapamil, ChatGPT incorrectly stated that no interactions had been reported for that combination. In reality, these medications can interact to lower blood pressure excessively, exposing patients to preventable side effects.
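To see what checking a primary source might look like in practice, the short Python sketch below pulls verapamil’s official “Drug Interactions” labeling section from openFDA, the FDA’s public API, instead of accepting a chatbot’s answer. The endpoint and field names follow openFDA’s published drug-label schema, but the snippet is only an illustrative sketch (it assumes network access and that the retrieved label populates the drug_interactions section); it is not part of the study’s methodology.

```python
import json
import urllib.parse
import urllib.request

# Look up verapamil's official "Drug Interactions" labeling section via
# openFDA's public drug-label endpoint, as one example of consulting a
# primary source rather than trusting a chatbot's answer.
params = urllib.parse.urlencode({
    'search': 'openfda.generic_name:"verapamil"',  # field from openFDA's label schema
    'limit': 1,
})
url = f'https://api.fda.gov/drug/label.json?{params}'

with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

label = data['results'][0]
# 'drug_interactions' is a list of free-text blocks from the package insert;
# not every label populates this section, hence the fallback message.
for block in label.get('drug_interactions', ['No interaction section in this label.']):
    print(block[:400])
```

Dedicated interaction checkers and a pharmacist’s review remain the appropriate clinical route; the point of the sketch is simply that authoritative, machine-readable sources exist and are easy to consult.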

For the 29 questions where ChatGPT’s response was unsatisfactory, the researchers found that 11 responses did not directly answer the question, 10 were inaccurate, and 12 were incomplete. (These counts sum to more than 29 because some responses exhibited more than one problem.)

As this study demonstrates, ChatGPT’s performance in the medical field is not yet up to par. Inconsistencies and inaccuracies in its responses could have serious consequences for patients who rely solely on its information. Medication-related information obtained from ChatGPT should therefore be cross-checked against trusted sources.

The findings of this study shed light on the limitations of AI language models, such as ChatGPT, in the medical domain. While these tools can be beneficial for certain tasks, they should not be treated as infallible sources of medical advice. As technology advances, it is crucial to prioritize accuracy and reliability when utilizing AI in healthcare settings.

In conclusion, clinicians and individuals seeking medical advice should exercise caution and consult trusted sources alongside AI language models like ChatGPT. The study by Sara Grossman and her team highlights the need for further development and improvement of AI in the medical field to ensure patient safety and the dissemination of accurate information.

Disclaimer: The information in this article is not intended or implied to be a substitute for professional medical advice, diagnosis, or treatment. All content, including text, graphics, images, and information, contained in this article is for general informational purposes only.

Frequently Asked Questions (FAQs) Related to the Above News

What is ChatGPT and what tasks can it assist with?

ChatGPT is an AI language model developed by OpenAI. It can assist with various tasks, such as suggesting dinner recipes and explaining intricate theories.

What did the recent study by Sara Grossman and her team aim to assess?

The study aimed to assess the capability of ChatGPT in the medical field.

How long did the study conducted by Sara Grossman and her team last?

The researchers collected the questions used in the study over a 16-month period.

What caution does the study recommend regarding using ChatGPT for medical advice?

The study cautions healthcare professionals and patients against relying on ChatGPT as an authoritative source for medication-related information. It advises individuals to verify the information provided by ChatGPT using trusted sources.

How were the performance and responses of ChatGPT evaluated in the study?

Pharmacists involved in the study researched and answered 45 queries, each of which was reviewed by a second investigator. The same questions, except for six for which published literature was lacking, were then posed to ChatGPT, and its responses were evaluated for accuracy and completeness.

What potential dangers were highlighted in the study when using ChatGPT without additional verification?

The study highlighted that ChatGPT provided incorrect information regarding a drug interaction between the COVID-19 treatment Paxlovid and the blood-pressure-lowering medication verapamil. This incorrect information could lead to preventable side effects for patients.

How many out of 39 questions posed to ChatGPT were deemed satisfactory in the study?

Only 10 out of 39 responses from ChatGPT were deemed satisfactory by the researchers.

What were the identified issues with the responses provided by ChatGPT in the study?

The researchers identified that 11 responses did not directly answer the question, 10 were inaccurate, and 12 were incomplete; some responses exhibited more than one of these problems.

What are the limitations of AI language models like ChatGPT in the medical domain?

The study demonstrates that ChatGPT's performance in the medical field is not yet up to par. It shows inconsistencies and inaccuracies in its responses, which could have serious consequences for patients relying solely on its information.

What should doctors and individuals seeking medical advice do when utilizing AI language models like ChatGPT?

Doctors and individuals should exercise caution and utilize trusted sources alongside AI language models like ChatGPT. The study highlights the need for further development and improvement of AI in the medical field to ensure patient safety and accurate information dissemination.

Please note that the FAQs provided on this page are based on the news article published. While we strive to provide accurate and up-to-date information, it is always recommended to consult relevant authorities or professionals before making any decisions or taking action based on the FAQs or the news article.
