New research by investigators from Mass General Brigham has found that artificial intelligence (AI) language models like ChatGPT are able to accurately identify appropriate imaging services for breast cancer screening and breast pain. The results suggest that large language models have the potential to assist decision-making for primary care doctors in evaluating patients and ordering imaging tests. The study, published in the Journal of the American College of Radiology, tested ChatGPT 4, a newer and more advanced version of the language model, and found that it outperformed its predecessor, ChatGPT 3.5, in choosing the right imaging test.
The researchers asked OpenAI’s ChatGPT 3.5 and 4 to help them decide which imaging tests to use for 21 made-up patient scenarios. For breast cancer screening, ChatGPT 4 answered an average of 98.4% of prompts correctly when given multiple-choice imaging options, compared to ChatGPT 3.5’s average of 88.9%. The study suggests that ChatGPT could act as a bridge between the referring healthcare professional and the expert radiologist, recommending the right imaging test at the point of care and reducing the administrative burden on both referring and consulting physicians.
ChatGPT, a large language model (LLM) trained on data from the internet to answer questions in a human-like way, was introduced in November 2022. The study is among the first to test ChatGPT’s clinical decision-making abilities, and researchers worldwide are now investigating how these AI tools can be used. The researchers suggest that integrating AI into medical decision-making could happen at the point of care, alerting primary care doctors to the best imaging options when data is entered into an electronic health record. A more advanced medical AI could be trained on datasets from hospitals and research institutions to make it more specific to health-focused applications.
Marc D. Succi, MD, associate chair of Innovation and Commercialization at Mass General Brigham Radiology, said that before any AI is involved in medical decision-making, it would need to be extensively tested for bias and privacy concerns and approved for use in medical settings. New regulations around medical AI could also play a big role in what makes it into patient care interactions.
Frequently Asked Questions (FAQs) Related to the Above News
What is the study by Mass General Brigham about?
The study by Mass General Brigham is about how AI language models like ChatGPT can accurately identify appropriate imaging services for breast cancer screening and breast pain.
What is ChatGPT 4?
ChatGPT 4 is a newer and more advanced version of the language model that was tested in the study.
What did the study find about ChatGPT 4?
The study found that ChatGPT 4 outperformed its predecessor, ChatGPT 3.5, in choosing the right imaging test. When it came to breast cancer screenings, ChatGPT 4 answered an average of 98.4% of prompts correctly when given multiple-choice imaging options, compared to ChatGPT 3.5's average of 88.9%.
How can AI language models like ChatGPT assist decision-making for primary care doctors?
AI language models like ChatGPT can assist primary care doctors in evaluating patients and ordering imaging tests. They can act as a bridge between the referring healthcare professional and the expert radiologist, recommending the right imaging test at the point of care and reducing the administrative burden on both referring and consulting physicians.
What is ChatGPT?
ChatGPT is a large language model (LLM) built on data from the internet to answer questions in a human-like way.
When was ChatGPT introduced?
ChatGPT was introduced in November 2022.
What are some potential uses for AI tools like ChatGPT in healthcare settings?
Some potential uses for AI tools like ChatGPT in healthcare settings include integrating them into medical decision-making at the point-of-care, alerting primary care doctors to the best imaging options when data is entered into an electronic health record, and creating more advanced medical AIs using datasets from hospitals and research institutions to make them more specific to health-focused applications.
What concerns should be addressed before AI is involved in medical decision-making?
Before AI is involved in medical decision-making, it would need to be extensively tested for bias and privacy concerns and approved for use in medical settings. New regulations around medical AI could also play a big role in what makes it into patient care interactions.