In a major development, the World Health Organization (WHO) has weighed in on Artificial Intelligence (AI). Calling for caution and scrutiny when dealing with these technologies, the WHO has focused in particular on ChatGPT, a large language model tool that is deployed across a wide range of applications.
The guidance issued by the health organization points out that while the deployment of AI applications in public health is encouraged, policymakers must prioritize the safety and autonomy of individuals when using such systems. Among the risks cited is AI's potential to produce distorted or inaccurate information, and the impact this could have on public health matters. Furthermore, the use of such systems in poor and underdeveloped regions of the world should be handled with caution to avoid fostering mistrust of healthcare workers among patients.
The WHO has suggested evaluating large language model tools like ChatGPT, BERT and Bard against five parameters: accuracy of results, ability to understand the context of natural language, safeguards against the misappropriation of data, adoption of privacy protocols, and implementation of ethics and governance measures.
To ensure responsible use, the WHO recommends that policymakers develop plans prioritizing “patient safety and protection” when deploying AI and other digital health applications. It also stresses that access to evidence on how these services are used is of utmost importance, which can be achieved by giving the public, healthcare workers and administrators access to the relevant research material.
In conclusion, the WHO reaffirmed its commitment to the six core principles set out in its guidance on ethics and governance of “AI for health”. These principles emphasize the critical need for governance and ethics when using AI technology for healthcare purposes, so that the long-term benefits of AI and digital health can be realized.