Google is working to fix its Gemini artificial intelligence (AI) tool, CEO Sundar Pichai told employees on Tuesday, calling the images generated by the model "biased" and "completely unacceptable."
The Alphabet-owned company paused Gemini’s image generation feature last week after users on social media flagged that Gemini was creating inaccurate historical images that sometimes replaced White people with images of Black, Native American and Asian people.
In a note to employees, Pichai said the tool’s responses were offensive to users and had shown bias.
| Ticker | Security | Last | Change | Change % |
|---|---|---|---|---|
| GOOG | ALPHABET INC. | 140.10 | +1.35 | +0.97% |
"Our teams have been working around the clock to address these issues. We’re already seeing a substantial improvement on a wide range of prompts … And we’ll review what happened and make sure we fix it at scale," he said, according to Reuters.
The company now plans to relaunch Gemini AI in the next few weeks. News website Semafor first reported the plan, which a Google spokesperson later confirmed.
Google has issued several apologies for Gemini after critics slammed the AI for creating "woke" content.
Prabhakar Raghavan, Google’s senior vice president of Knowledge & Information, said Friday in a blog post explaining what went wrong that the AI tool "missed the mark."
While explaining "what happened," Raghavan said Gemini was built to avoid "creating violent or sexually explicit images, or depictions of real people" and that various prompts should provide a "range of people" versus images of "one type of ethnicity."
"However, if you prompt Gemini for images of a specific type of person — such as ‘a Black teacher in a classroom,’ or ‘a white veterinarian with a dog’ — or people in particular cultural or historical contexts, you should absolutely get a response that accurately reflects what you ask for," Raghavan wrote. "So what went wrong? In short, two things. First, our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range. And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely — wrongly interpreting some very anodyne prompts as sensitive."
"These two things led the model to overcompensate in some cases, and be over-conservative in others, leading to images that were embarrassing and wrong," he continued. "This wasn’t what we intended. We did not want Gemini to refuse to create images of any particular group. And we did not want it to create inaccurate historical — or any other — images."
Raghavan said Gemini’s image generation feature will go through "extensive testing" before it is back online.
Since the launch of OpenAI’s ChatGPT in November 2022, Google has been racing to produce AI software that rivals the Microsoft-backed company’s offerings.
When Google released its generative AI chatbot Bard a year ago, a promotional video showed the tool sharing inaccurate information about pictures of a planet outside the Earth’s solar system, causing the company’s shares to slide as much as 9%.
Bard was rebranded as Gemini earlier this month, and Google has introduced three versions of the product at different subscription tiers: Gemini Ultra, the largest and most capable for highly complex tasks; Gemini Pro, best for scaling across a wide range of tasks; and Gemini Nano, the most efficient for on-device tasks.
Fox Business’ Joseph Wulfsohn, Nikolas Lanum and Reuters contributed to this report.