OpenAI’s ChatGPT 4 has shown its ability to support clinical decision-making, including choosing appropriate radiological imaging tests for breast cancer screening and breast pain, according to a study by investigators at Mass General Brigham in the US. The study found that large language models have the potential to assist primary care physicians and referring providers in evaluating patients and ordering imaging tests. The results were published in the Journal of the American College of Radiology. In this scenario, ChatGPT’s abilities were impressive, said Marc D. Succi, associate chair of Innovation and Commercialisation at Mass General Brigham Radiology, who sees ChatGPT acting as a trained consultant that recommends the right imaging test at the point of care.
During the study, investigators asked ChatGPT 3.5 and ChatGPT 4 to recommend imaging tests for 21 hypothetical patient scenarios involving breast cancer screening or breast pain, judged against appropriateness criteria. Each model was tested both with open-ended questions and with a list of imaging options to choose from. ChatGPT 4 outperformed ChatGPT 3.5, especially when given the available imaging options: on breast cancer screening prompts with multiple-choice imaging options, ChatGPT 3.5 answered an average of 88.9% correctly, while ChatGPT 4 achieved 98.4%. The results suggest that ChatGPT could reduce administrative time, patient confusion and wait times, while optimising workflow and reducing burnout for referring and consulting physicians.
Frequently Asked Questions (FAQs) Related to the Above News
What is ChatGPT 4 and what are some of its capabilities?
ChatGPT 4 is an advanced chatbot developed by OpenAI, built on a large language model. In the study, it showed the ability to support clinical decision-making, particularly in choosing appropriate radiological imaging tests for breast cancer screening and breast pain.
What was the purpose of the study conducted by investigators from Mass General Brigham in the US?
The purpose of the study was to determine whether large language models like ChatGPT 4 could assist primary care physicians and referring providers in evaluating patients and ordering imaging tests.
How did ChatGPT 4 perform in the study compared to its predecessor, ChatGPT 3.5?
ChatGPT 4 outperformed ChatGPT 3.5, especially when given the available imaging options. When asked about breast cancer screening and given multiple-choice imaging options, ChatGPT 3.5 answered an average of 88.9% of prompts correctly, while ChatGPT 4 achieved 98.4% accuracy.
What are some potential benefits of using ChatGPT in clinical decision-making?
According to the study, ChatGPT could reduce administrative time, patient confusion and wait times, as well as optimise workflow and reduce burnout for referring and consulting physicians.
How does Marc D. Succi, associate chair of Innovation and Commercialisation at Mass General Brigham Radiology, envision ChatGPT being used?
Succi envisions ChatGPT acting as a trained consultant to recommend the right imaging test at the point of care, assisting primary care physicians and referring providers in evaluating patients and ordering imaging tests.