Google and its Gemini software recently came under fire after users complained that the company's AI model was deliberately refusing to generate historically accurate depictions of people based on ethnicity. In response to the controversy, Google paused Gemini's ability to generate images of people and published an official statement from Prabhakar Raghavan, Senior Vice President at Google. Part of his post reads:
“Three weeks ago, we launched a new image generation feature for the Gemini conversational app (formerly known as Bard), which included the ability to create images of people.”
“It’s clear that this feature missed the mark. Some of the images generated are inaccurate or even offensive. We’re grateful for users’ feedback and are sorry the feature didn’t work well.”
“We’ve acknowledged the mistake and temporarily paused image generation of people in Gemini while we work on an improved version.”
Raghavan adds that the Gemini team's tuning caused the model to become more cautious than intended, refusing to answer certain prompts and wrongly interpreting some innocuous ones as sensitive. As a result, the model overcompensated in some cases and was over-conservative in others, at times producing erroneous results.
With all that said, Raghavan states that this was not what Google intended, which is why Gemini's human image generation feature has been turned off; the goal is to improve it significantly and put it through extensive testing before turning it back on.
Source: Google