This research investigates the performance of GPT-3.5, GPT-4, and human users on a simulated ophthalmology exam provided by the American Academy of Ophthalmology. The comparison followed established methodology, and its outcomes were verified. The study also examined the models' ability to pass the US Medical Licensing Examination (USMLE) Step exams and how their performance compared with that of neurosurgery residents.
A recent study led by Andrew Mihalache of the University of Western Ontario found that ChatGPT, an AI-based tool, correctly answered fewer than half of the questions on a practice ophthalmology board certification exam. ChatGPT performed best in the general medicine category. Even so, the researchers urge caution, as the tool is not yet reliable enough to prepare candidates for the board certification test.