Google has temporarily halted its Gemini AI image generation tool after it faced criticism for refusing to display images of white people. The decision came after the tool depicted historical figures as Black, Native American, or Asian people, at times producing historically inaccurate images. Google, a subsidiary of Alphabet, issued an apology for the incident.
Gemini, previously known as Google Bard, is a chatbot built on a large language model that offers human-like responses to users. While it aims to provide accurate and diverse information, its responses can vary depending on context, language, and the training data used.
Recently, social media users raised concerns that Gemini was refusing to show images of white people, with the tool responding that doing so could reinforce harmful stereotypes. When asked to display images of Black people, it provided photos of notable figures along with their contributions to society, but it was reluctant to produce comparable images celebrating white people.
Jack Krawczyk, Google’s product director for Gemini, acknowledged the need to improve the tool’s depiction of diversity and said efforts were underway to address the issue promptly. Despite the criticism, Google says it remains committed to improving its AI technology and providing a wide range of visuals to users worldwide.
Since the release of rival AI products such as OpenAI’s ChatGPT, Google has been working actively to improve its own AI capabilities. The recent rebranding of Google Bard as Gemini signals the company’s focus on delivering innovative solutions to users. With the underlying model offered in several sizes, including Gemini Ultra, Pro, and Nano, Google aims to serve a variety of user needs efficiently.
As Google continues to advance its AI technology, addressing concerns about diversity and representation remains crucial. By incorporating feedback and making the necessary improvements, Google aims to enhance the overall user experience and provide accurate, inclusive results through its Gemini AI tool.