Google’s Gemini chatbot has recently come under scrutiny for unexpected and controversial responses. Users discovered that when asked to generate images of historical figures such as Vikings or the Founding Fathers, the chatbot portrayed them with diverse ethnicities, including an image of George Washington as a Black man and the pope as an Asian woman. This led to questions about the programming and intent behind these depictions.
However, the surprises did not stop there. Gemini’s text responses also revealed a progressive bias, with the chatbot providing arguments in favor of affirmative action but refusing to present arguments against it. It even refused to write a job ad for a fossil-fuel lobby group, citing environmental concerns.
Some critics believe these responses were not accidental but reflected a deliberate decision by Google to promote a particular agenda. This has raised questions about the tech giant’s culture and whether it is using its influence for social engineering. Google’s CEO, Sundar Pichai, has acknowledged the need to address these issues and has promised adjustments to Gemini.
The incident has highlighted the importance of thorough testing and oversight in the development of AI models. With Google facing scrutiny over Gemini’s behavior, the tech industry is once again reminded of the risks of deploying AI systems without adequate calibration.
As Google works to fine-tune Gemini and address the concerns raised by its responses, all eyes are on the company to see how it will navigate this controversy and whether it will make changes to prevent similar incidents in the future.