The use of AI in healthcare is gaining traction, but a recent study found that ChatGPT underperformed on a self-assessment tool in the field of urology. The chatbot answered only 26.7% of questions correctly, and many of its responses to open-ended questions were cyclical or redundant. Because such errors risk spreading medical misinformation, healthcare providers should test AI tools thoroughly before relying on them. Despite the chatbot's successes in other domains, the study's authors concluded that further refinement is needed before it can be deployed in clinical settings.