Researchers at UC San Diego evaluated ChatGPT's responses to public health questions spanning addiction, mental health, violence, and physical health. While most responses were evidence-based, explicit resource recommendations were few, and the authors encourage collaboration between AI companies and health organizations to connect users with health resources more effectively.
Despite being a popular source of information, ChatGPT falls short in addressing public health issues such as addiction and domestic violence, according to a study published in JAMA. While ChatGPT's responses are well informed, they frequently fail to provide referrals to necessary resources. Experts suggest that AI companies prioritize the human touch and build partnerships with public health leaders to develop resource databases and refine models for specialized medical guidance. Users should understand ChatGPT's limitations in responding to health emergencies.
A study reveals that AI assistants like ChatGPT deliver evidence-based responses to public health questions 91% of the time. The researchers recommend that AI assistants direct people to helpful resources such as helplines.
Artificial intelligence (AI) platforms like ChatGPT may soon be able to assist people with physical and mental health questions. However, a study found that their responses often omit referrals to the specific resources needed for successful treatment. Connecting individuals to trained professionals should be a priority for AI systems seeking to improve public health outcomes. While relying on AI for health information is common, trained professionals remain the best source for treatment.
AI is progressing quickly and poses significant risks, warn tech leaders. Regulating AI is crucial to prevent job market disruption and discrimination. #AIrisks #techleaders #regulation
Explore the evolution of tech policy from Obama's optimism to Harris's vision at the Democratic National Convention. What's next for Democrats in tech?