US Medical School Study Finds AI Chatbot Effectively Diagnoses Complex Cases
In a groundbreaking experiment, researchers at a US medical school tested OpenAI's GPT-4 to determine how accurately it could diagnose challenging medical cases. The team at Beth Israel Deaconess Medical Center (BIDMC) in Boston, Massachusetts, found that GPT-4 reached the correct diagnosis in nearly 40 percent of cases.
The findings, published in the Journal of the American Medical Association (JAMA), also showed that GPT-4 included the accurate diagnosis in its list of potential conditions in two-thirds of the complex cases.
Dr. Adam Rodman, co-director of the Innovations in Media and Education Delivery (iMED) Initiative at BIDMC, highlighted the progress of artificial intelligence (AI) models: "Recent advances in artificial intelligence have led to generative AI models that are capable of detailed text-based responses that score highly in standardized medical examinations." The researchers therefore set out to test whether such a generative model could mimic a doctor's thinking process when solving standardized complex diagnostic cases, and the results were extremely promising.
To evaluate the chatbot's diagnostic skills, Dr. Rodman and his team used clinicopathological case conferences (CPCs). These conferences present a series of intricate patient cases, including relevant clinical and laboratory data, imaging studies, and histopathological findings, and are typically published in the New England Journal of Medicine for educational purposes.
In total, 70 CPC cases were assessed. The AI matched the final CPC diagnosis exactly in 39 percent of the cases and included the correct diagnosis in its list of potential conditions in 64 percent.
Although the research team acknowledges that chatbots cannot replace the expertise and knowledge of trained medical professionals, generative AI shows promising potential as an adjunct to human cognition in the diagnostic process. Dr. Zahir Kanjee, a hospitalist at BIDMC and assistant professor of medicine at Harvard Medical School, explained, "It has the potential to help physicians make sense of complex medical data and broaden or refine our diagnostic thinking."
While this study contributes to the growing body of literature on AI technology in healthcare, more research is needed to fully understand its optimal uses, benefits, and limitations. Addressing privacy concerns will also be pivotal in determining how these new AI models might transform healthcare delivery in the future.
In summary, this study conducted by a US medical school demonstrates the potential of AI chatbots, specifically GPT-4, to help diagnose challenging medical cases. Although AI cannot replace human expertise, it shows promise as a tool that can help healthcare professionals evaluate complex medical data and sharpen their diagnostic thinking. With continued research and development, and with privacy concerns and the primacy of medical professional expertise kept front and center, AI technology could transform healthcare delivery.