The Gemini app, billed by Google as a revolutionary new search tool, has been criticized after some users asked it to generate images of historical figures, such as German soldiers from World War II and the Pope. Some of Gemini's images depicted Nazi soldiers as Black and Asian people, and the Pope, historically always a white man, as a woman.
Faced with user dissatisfaction and biased responses from the AI tool, Google has temporarily suspended Gemini's image generator. Chief Executive Officer Sundar Pichai wrote in an email to employees on Tuesday: "I want to address the issues that have arisen in the Gemini app, specifically involving some of its text and image responses. I know that some of these responses have been offensive to our users and demonstrated bias - to be clear, this is completely unacceptable and we got it wrong."
The Gemini image generator problem is a setback for Google’s push into artificial intelligence as the company tries to keep pace with rivals such as Microsoft. Last month, Google rebranded Bard, a chatbot it launched last year, as Gemini, describing the revamped product as its strongest large AI model.
Google and other tech companies say they conduct extensive safety and ethical testing of their models, but Maria Curi, a tech policy reporter at Axios, said: "We don't know what the exact testing process is. Users have discovered historical inaccuracies, which raises the question of whether these models were pushed out to the world too soon."
In Pichai's memo, he said Google employees have been "working around the clock to address these issues. We've seen significant improvements across a variety of prompts." He added: "No AI is perfect, especially early in the industry's development, but we know the demands are high and we will keep working on it, no matter how long it takes. We will review what happened and make sure we fix it at scale."
AI-powered chatbots are also raising concerns about the role they could play in the upcoming U.S. election. A study released Tuesday found that Gemini and four other widely used AI tools produced inaccurate election information more than half the time, and even directed voters to polling locations that do not exist. Experts worry that powerful new AI could feed voters false and misleading information, and could even keep people from going to the polls.