Google’s Gemini AI has a racism problem

  • Google has paused its Gemini AI’s ability to generate images after it refused to generate images of white people for many users.
  • The situation has caused an outcry on social media with even the likes of Elon Musk alleging that it is some form of “wokeness” by Google.
  • Some of these same people have responded with their own racist posts.

Earlier in February, Google launched the ability for users to generate images through prompts in its Gemini AI platform, similar to what Microsoft Copilot and ChatGPT Premium offer, but the tech giant soon paused the feature altogether after inaccurate depictions had some users claiming racism.

“We’re aware that Gemini is offering inaccuracies in some historical image generation depictions,” said Google last week. These inaccuracies emerged for users who prompted the AI to depict the “original founding fathers”, “vikings” or “the pope” – figures who, with the possible exception of the most recent pope, were historically Caucasian.

Gemini depicted these figures as people of colour, or at least that is what users on social media are alleging.

This revelation has caused an outcry on X, especially among its right-wing users. The Gemini image generator appears to have a series of rules in place to include a diverse group of ethnicities and genders in its generations, so if you use a general prompt like “generate an image of vikings” and don’t specify “white vikings”, it will apply this rule.

Meanwhile, other users are outright claiming that Gemini refuses to generate images of white people. One user posted on X apparent screenshots of them asking Gemini to generate an image of a successful white businessman, only to be told by the AI that the request goes against its principles of avoiding racial bias. The AI had no issue generating an image of a successful black businessman, however.

This has led to some of these images and other posts involving the chatbot circulating on social media, with users claiming Google has “gone woke”. This includes X owner Elon Musk, who has chimed in on a few screenshots of Gemini being tricked into scenarios where its diversity principles get in the way of logical responses.

It is important to note that much of this is being done by users in bad faith who want to stoke the fires of racism, bigotry and the “wokeness” culture war.

In a now-limited post on X, likely restricted due to harassment, Gemini’s Senior Director of Product Jack Krawczyk said: “Historical contexts have more nuance to them and we will further tune to accommodate that.”

It seems to us that this issue is caused by an over-correction on Google’s part. Bias is one of the leading criticisms of AI technology, particularly when it comes to image generation. A Bloomberg report from 2023 showed how the image generator Stable Diffusion would generate images of white people when asked to depict more successful professions, and images of people of colour when asked to depict less successful ones.

So CEOs are always white men and fast food workers are always black women, at least according to the AI. This is the result of selection bias: there are more depictions of white male CEOs in the media than black women CEOs, and this skews the AI’s training data. We believe Google put these rules in place to try to stave off some of this selection bias, but the measure has backfired.

The company has not indicated when the Gemini image generator will be up and running again. In response to the perceived bias against white people, some users have been proudly posting incredibly racist depictions in an attempt to “own” Google and Gemini.

The entire situation presents another difficult problem for Google when it comes to its generative AI. When it first launched Bard, the model that would eventually become Gemini, last year, a spate of very public inaccuracies from the chatbot sent the company’s shares sliding and had users questioning the accuracy of Google’s AI, especially in comparison with its rivals.

Google was on the back foot in the generative AI race, arriving late to the table where OpenAI was already licking the plate, and Microsoft had just started with the chocolate parfait. Google has been rushing to catch up, which could explain why it has had to deal with inaccurate responses and now this issue with its AI principles.

The tech giant has a racism problem with its AI, but this problem stems from some of its userbase and not its principles.

[Image – @BasedTorba on X]

