WHO Issues New Guidance on Ethics & Governance of AI in Healthcare
Geneva, January 18, 2024 – In a move to address the ethical and governance challenges posed by large multi-modal models (LMMs) in healthcare, the World Health Organization (WHO) has released new guidance. LMMs are a type of generative artificial intelligence (AI) technology that has experienced rapid growth and has various applications in the healthcare sector.
The WHO’s guidance comprises over 40 recommendations for governments, technology companies, and healthcare providers, intended to ensure the responsible and appropriate use of LMMs and to safeguard the health of populations.
LMMs can accept different types of data inputs, such as text, videos, and images, and generate correspondingly diverse outputs. What sets LMMs apart is their ability to mimic human communication and to perform tasks they were not explicitly programmed to carry out. Their adoption has been unprecedented, with platforms such as ChatGPT, Bard, and Bert gaining widespread recognition in 2023.
While LMMs offer potential benefits in healthcare, there are also inherent risks. The WHO’s guidance highlights five primary applications of LMMs in health: diagnosis and clinical care, patient-guided use, clerical and administrative tasks, medical and nursing education, and scientific research and drug development.
One of the major concerns is the production of false, inaccurate, biased, or incomplete information by LMMs, which could lead to harmful health decisions. LMMs may also be trained on biased or poor-quality data, perpetuating inequalities based on race, ethnicity, gender, or age.
The guidance also emphasizes broader risks to health systems, including the accessibility and affordability of the most effective LMMs. There is also the possibility of automation bias, in which healthcare professionals and patients rely too heavily on LMMs, overlooking errors or delegating complex decisions to the technology. Cybersecurity is a further concern, as both the protection of patient information and the trustworthiness of the underlying algorithms are critical.
To ensure the development and deployment of safe and effective LMMs, the WHO underscores the importance of engaging various stakeholders such as governments, technology companies, healthcare providers, patients, and civil society throughout the process. Transparent oversight and regulation are crucial in managing the design, development, and use of LMMs to achieve better health outcomes and address existing health inequities.
The guidance also sets out key recommendations for governments, including investing in accessible and ethical infrastructure, ensuring that LMMs meet ethical and human rights standards, assigning regulatory agencies to assess and approve LMMs, and implementing post-release auditing and impact assessments.
For developers of LMMs, the guidance emphasizes the importance of inclusive design involving potential users, medical providers, researchers, and patients from the early stages. LMMs should be designed to fulfill well-defined tasks accurately and reliably to enhance health systems and prioritize patient interests.
In conclusion, the WHO’s new guidance on the ethics and governance of AI in healthcare sets out comprehensive recommendations for the responsible use of LMMs. As these powerful technologies continue to transform healthcare, prioritizing ethics, transparency, and the protection of human rights will be crucial to maximizing their benefits while mitigating their risks. Governments, technology companies, healthcare providers, and society at large must collaborate to harness the potential of LMMs while safeguarding the health and well-being of populations.