AI image generators such as Stable Diffusion and OpenAI’s DALL-E produce noticeably biased images when prompted to create artworks of ‘African workers’ compared with ‘European workers’. The outputs for ‘African workers’ lean on stereotypes, depicting malnourished figures performing simple manual labor with crude tools. The images for ‘European workers’ show the opposite: well-dressed, smiling workers pictured alongside colleagues who share their racial characteristics.
Generative artificial intelligence seeks to replicate human decision-making and creative work across a myriad of domains. These models can produce enormous volumes of images, text, or other media; OpenAI’s ChatGPT, for example, can write entire paragraphs that read as though a human wrote them. Stable Diffusion, specifically, learns by studying vast quantities of images scraped from the web. Training such models on poorly curated data can lead to oversights and bias.
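To see how such outputs are typically produced, here is a minimal sketch of text-to-image generation with Stable Diffusion via Hugging Face’s diffusers library; the checkpoint name and settings are illustrative assumptions, not details reported in the article.

```python
# Minimal sketch: text-to-image generation with Stable Diffusion using the
# Hugging Face diffusers library. The checkpoint and settings below are
# illustrative assumptions, not details from the article.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # assumed public checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")                   # a GPU is strongly recommended

# The phrasing of the prompt alone can shift whom and what the model depicts.
prompt = "a portrait of a worker at their job"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("worker.png")
```

Because the model only reproduces patterns it has seen during training, any stereotypes present in the training images can surface directly in outputs like the one above.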
Critics question whether this technology is being launched recklessly, with the consequences left to be addressed later; last year, for example, the AI-based avatar app Lensa faced intense criticism for producing overly sexualized images of women while generating tame, PG-friendly avatars for men. Advocates counter that AI’s potential benefits, above all gains in productivity, could ultimately outweigh the risks.
Stable Diffusion is trained on LAION-5B, a massive open-source dataset of image-text pairs scraped from the web. Because the dataset is openly accessible, anyone can trace damaging outputs back to the kinds of images the model learned from; a quick search of the dataset surfaces similar pictures. An AI researcher with a PhD in the field, who spoke to Insider anonymously, suggested improving data collection methods with safety protocols to avoid producing stereotypical outputs. Similarly, Sasha Luccioni, a researcher at Hugging Face, proposed labeling model outputs with disclaimers that reflect potential biases.
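As an illustration of how openly the dataset can be queried, here is a rough sketch using the clip-retrieval client published alongside the LAION project; the endpoint URL and index name are assumptions and may have changed, and this tooling is not mentioned in the article.

```python
# Rough sketch: querying the public LAION-5B index with the clip-retrieval
# client to see which training images resemble a given concept. The endpoint
# URL and index name below are assumptions and may change over time.
from clip_retrieval.clip_client import ClipClient

client = ClipClient(
    url="https://knn.laion.ai/knn-service",  # public kNN service (assumed)
    indice_name="laion5B-L-14",              # LAION-5B index name (assumed)
    num_images=10,
)

# Each result carries a caption, an image URL, and a similarity score, which
# lets anyone inspect the material a model trained on this data may have
# learned from for a given concept.
results = client.query(text="workers at a construction site")
for r in results:
    print(r.get("similarity"), r.get("caption"), r.get("url"))
```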
Stability AI, the company behind the tool, did not comment on the situation. Safety mechanisms such as the disclaimers Luccioni describes could give users of AI-based technologies a clearer understanding of the information these tools present to them.
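To make the idea concrete, here is a small, hypothetical sketch of what such a labeling mechanism might look like in practice: a wrapper that attaches a bias disclaimer to every generated image. The names and wording are illustrative, not something proposed in the article.

```python
# Hypothetical sketch of output labeling: attach a bias disclaimer to every
# generated output so users know what they are consuming. All names and
# wording here are illustrative, not drawn from the article.
from dataclasses import dataclass

DISCLAIMER = (
    "This image was produced by a generative model trained on web-scraped "
    "data and may reflect or amplify societal biases."
)

@dataclass
class LabeledOutput:
    image: object          # e.g. a PIL.Image returned by a diffusion pipeline
    prompt: str
    disclaimer: str = DISCLAIMER

def generate_with_disclaimer(pipe, prompt: str) -> LabeledOutput:
    """Run a text-to-image pipeline and return the image with its label."""
    image = pipe(prompt).images[0]
    return LabeledOutput(image=image, prompt=prompt)
```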
Stability AI is a leading software development company specializing in artificial intelligence models. Its team of experts actively seeks out and develops powerful AI models to help automate mundane tasks. From facial recognition models for law enforcement to generative models for creating artworks, Stability AI is poised to shape the future of artificial intelligence and its applications.
Sasha Luccioni, who stressed the importance of labeling AI model outputs, is a respected AI researcher at Hugging Face. She has worked on projects spanning machine learning, natural language processing, computer vision, and AI optimization, and is a key member of the Hugging Face research team, contributing significantly to its development of AI models.