
Google Explains What Went Wrong with Gemini AI Image Generation


By IANS

Published : Feb 24, 2024, 9:59 AM IST

Google has halted Gemini's image generation of people while it works on improving the accuracy of its responses. The company has faced criticism over the feature's potential violation of Indian IT laws, with Elon Musk accusing Google of running "racist, anti-civilisational programming." Google's Senior Vice President, Prabhakar Raghavan, acknowledged the issue.

New Delhi: As world leaders and industry stalwarts slammed Google over inaccuracies in its AI-generated historical images, the tech giant has tried to explain what exactly went wrong with its AI.

The company has decided to pause Gemini’s image generation of people while it works on “improving the accuracy of its responses”. Union Minister of State for Electronics and IT, Rajeev Chandrasekhar, expressed concern over the potential violation of Indian IT laws by Google's Gemini AI chatbot, while Tesla and SpaceX CEO Elon Musk accused Google of running "racist, anti-civilisational programming" with its AI models.

Prabhakar Raghavan, Senior Vice President at Google, admitted in a recent blog post that it is clear “this feature missed the mark”. “Some of the images generated are inaccurate or even offensive. We’re grateful for users’ feedback and are sorry the feature didn't work well,” Raghavan said.

So what went wrong? “In short, two things. First, our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range,” explained Raghavan. And second, over time, the model became way more cautious than “we intended and refused to answer certain prompts entirely -- wrongly interpreting some very anodyne prompts as sensitive”.

The company said it did not want Gemini to create inaccurate historical -- or any other -- images. “So we turned the image generation of people off and will work to improve it significantly before turning it back on. This process will include extensive testing,” said Raghavan.

However, he said that he can’t “promise that Gemini won’t occasionally generate embarrassing, inaccurate or offensive results”. “But I can promise that we will continue to take action whenever we identify an issue,” Raghavan added.
