Google is rushing to address concerns over its new AI-powered image generation tool after being accused of over-correcting to avoid the risk of racist outputs. The tool, known as Gemini, was criticized for producing historically inaccurate images, such as depicting women and people of color when prompted for America's founding fathers.
Jack Krawczyk, senior director for Gemini Experiences, acknowledged that the tool was missing the mark and said the company was working to fix the issues immediately. This is not the first time AI technology has run into problems around race and diversity: Google previously apologized after its Photos app labeled a photograph of a Black couple as gorillas.
Critics argued that Gemini's results were skewed towards certain demographics, prompting accusations that the tool is overly "woke." The company has emphasized that it wants its image generation to reflect its global user base, and Mr. Krawczyk noted that historical subjects carry additional nuance, saying the tool will be adjusted to accommodate those cases.
The claims of bias and inaccuracy in AI-generated images have gained traction, particularly in right-wing circles in the US, where concerns about liberal bias on tech platforms are already widespread. Despite the criticism, Google says it remains committed to refining Gemini and acting on user feedback to improve the tool's performance.
Overall, the company says it aims to strike a balance between inclusivity and accuracy in its image generation. The ongoing adjustments reflect Google's stated intention to improve diversity and representation in its AI products while still delivering results users can trust as accurate and culturally sensitive.