Google’s latest AI model, Gemini, has faced backlash for generating historically inaccurate images of people. The system, designed to produce diverse imagery, was found to be generating anachronistic depictions, such as exclusively Black individuals in Viking garb or Indigenous people portrayed as founding fathers.
Many critics, particularly on the far right, have seized on the issue as evidence of anti-white bias within Big Tech. Entrepreneur Mike Solana went so far as to call Google’s AI an “anti-white lunatic.”
However, experts point out that the real issue lies in the limitations of generative AI systems like Gemini. Gary Marcus, a professor of psychology and AI entrepreneur, dismissed the system as “lousy software.”
In response, Google acknowledged the problem and announced a temporary pause on generating images of people until a fix can be implemented. Jack Krawczyk, a senior director at Google, said the system had “missed the mark” and that the company is working to improve the accuracy of its historical depictions.
Some users have noted, however, that not every prompt produces historically inaccurate images, suggesting the errors are inconsistent rather than universal.
The situation highlights the challenge of designing AI systems that balance diverse representation with historical accuracy: a model tuned to diversify its outputs by default may do so even when a prompt specifies a particular era or population. Google’s fix will need to fine-tune Gemini to accommodate such historical nuance without abandoning its broader representation goals.
As the debate continues, AI developers must weigh the implications of biased algorithms and strive for both accuracy and inclusivity in image generation technology. Ultimately, the Gemini controversy is a reminder of the complexity and responsibility involved in building AI models for diverse global audiences.