OpenAI’s ChatGPT Fails U.S. Urologists’ Self-Assessment Test

OpenAI’s ChatGPT chatbot has failed a US urology exam, according to a study published in the journal Urology Practice. The study found that ChatGPT answered fewer than 30% of questions correctly on the American Urological Association’s Self-Assessment Study Program (SASP) for Urology, whether the questions were posed in multiple-choice or open-ended form. Beyond its low rate of correct answers, the chatbot also made errors that could spread medical misinformation. The study excluded 15 questions that contained visual information such as pictures. The researchers suggested that while ChatGPT may perform well on tests of factual recall, it struggles with questions in clinical medicine, which require weighing multiple overlapping facts, situations, and outcomes.