People are increasingly turning to ChatGPT, the artificial intelligence chatbot from OpenAI, for many kinds of information, including medical queries. However, a study published in the journal JAMA Network Open suggests that ChatGPT falls short in certain areas, particularly when responding to requests for help with health crises. Researchers found that when asked about public health issues such as addiction, domestic violence, and suicidal tendencies, ChatGPT frequently failed to refer users to appropriate resources. Of the 23 public health questions posed by the researchers, only 22 percent of ChatGPT's responses included such referrals.
In many cases, ChatGPT's responses were well informed and matched what subject matter experts might say. However, professionals argue that AI companies need a more comprehensive approach to public health, one that pairs AI accuracy with a human touch. Study co-author John W. Ayers suggested that AI companies should be encouraged, or even mandated, to promote essential resources and to partner with public health leaders on measures such as databases of recommended resources. Harvey Castro, a board-certified emergency medicine physician, observed that AI models should be refined to offer more specialized medical guidance, and that users need to understand the tool's limitations.