AI in Global Health Reproduces Inequality and Prejudice in Images, Study Finds
A recent study has revealed that artificial intelligence (AI) used in global health can inadvertently perpetuate existing inequality and prejudice. The study, published in The Lancet Global Health, highlights the ways in which AI reproduces biased imagery when depicting healthcare scenes. The researchers behind the study aimed to challenge stereotypes by asking an AI to generate images that subverted common global health tropes, such as the white savior and the powerless victim. The results, however, were problematic, showing distressing images of Black African people hooked up to machines and receiving care, as well as white doctors providing care to Black patients. These images reproduced long-standing inequalities embedded in public health imagery, denying dignity and respect to people of marginalized genders, races, ethnicities, and classes.
The experiment used Midjourney Bot Version 5.1, a generative AI model that converts text prompts into photorealistic images. The researchers fed the AI prompts that inverted the traditional premise, such as Black African doctors administering vaccines to poor white children. While the AI readily produced separate images of suffering white children and of Black African doctors when each was requested on its own, it struggled when the two were combined: asked to depict Black African doctors caring for suffering white children, it consistently rendered the children as Black, and in some images it depicted the doctors as white or dressed the African doctors in culturally offensive "exotic" clothing.
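For readers curious how such a bias audit could be structured, the sketch below outlines one possible harness in Python. It is only an illustration under stated assumptions: `generate_images` and `label_image` are hypothetical placeholders, since Midjourney is operated through Discord and offers no official public API, and in the actual study the images were reviewed by the researchers rather than labeled automatically.

```python
# A minimal sketch of a prompt-inversion bias audit, loosely modeled on the
# experiment described above. Both backend functions are hypothetical
# placeholders and must be supplied by the reader.

from collections import Counter

# Each pair holds a baseline prompt (the standard trope) and its inversion.
PROMPT_PAIRS = [
    ("white doctors providing care to Black African children",
     "Black African doctors providing care to suffering white children"),
]


def generate_images(prompt: str, n: int = 20) -> list[str]:
    """Hypothetical wrapper around a text-to-image service.

    Returns file paths for `n` images generated from `prompt`. In the real
    experiment this step was performed manually with Midjourney v5.1.
    """
    raise NotImplementedError("plug in an image-generation backend here")


def label_image(path: str) -> dict:
    """Hypothetical annotation step recording the perceived race of the
    caregiver and the care recipient in an image (in practice this is done
    by human coders, not automatically)."""
    raise NotImplementedError("plug in a human or automated annotation step")


def audit(pairs=PROMPT_PAIRS, n: int = 20) -> None:
    """Generate images for each baseline/inverted prompt pair and tally
    caregiver/recipient pairings across the outputs."""
    for baseline, inverted in pairs:
        for prompt in (baseline, inverted):
            tallies = Counter()
            for path in generate_images(prompt, n):
                annotation = label_image(path)
                tallies[(annotation["caregiver"], annotation["recipient"])] += 1
            # If the inverted prompt still yields the baseline pairing,
            # the model has overridden the prompt with its learned trope.
            print(prompt, dict(tallies))
```

Separating generation from labeling mirrors the study's own workflow, where the revealing step was not producing the images but systematically recording who the model cast as caregiver and who as patient.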
This study sheds light on the biases present in global health and its visual representations. Research has shown that global health publications often mirror racial, gendered, and colonial biases when depicting disease. For example, stories on antibiotic resistance have used images of Black African women dressed in traditional outfits, while images of Asian and Muslim people have been used to illustrate stories about COVID-19. These misrepresentations normalize stereotypes and harm marginalized communities, reinforcing existing structural racism and the legacies of colonialism.
The researchers argue that generative AI should not be viewed as an apolitical technology, because it is trained on data that reflects existing power imbalances. AI can identify race, gender, and ethnicity from medical images that carry no overt markers of them, and training AI on ever-larger data sets can entrench racial biases rather than dilute them. As such, caution must be exercised when deploying emerging technologies like AI, especially in new and untested areas.
Although AI holds promise in global health, it also poses risks: it can perpetuate stereotypes and automate harmful representations at scale. The researchers urge a careful examination of AI's history and contexts before its deployment. They also highlight the need for better data sets and for robust models of AI regulation, accountability, transparency, and governance. It is crucial to ask who owns the data and who benefits from AI interventions in the Global South. By confronting the biases inherent in AI and global health, it becomes possible to shape these technologies in more inclusive and equitable ways.
In conclusion, this study serves as a wake-up call for the AI community and global health practitioners. AI has the potential to reshape healthcare, but without careful consideration and conscious efforts to address biases, it can inadvertently perpetuate inequality and prejudice. It is essential to challenge existing power imbalances and ensure that AI is harnessed in a way that promotes fairness, dignity, and respect for all individuals, regardless of their gender, race, ethnicity, or class. Only then can AI truly contribute to improving global health outcomes in a just and equitable manner.