The Grok image generator is perilous and hypocritical: here’s why

This week, X launched an image generator for subscribers via its Grok AI, adding yet another entry to the seemingly endless horde of image generators out there, with free tools now running on Google and Meta software among many others.

The software gained text-to-image capabilities seemingly without any fanfare, alongside other new additions: Grok and Grok mini are now Grok-2 and Grok-2 mini, available in beta form for paying users of the social media platform.

“We are excited to release an early preview of Grok-2, a significant step forward from our previous model Grok-1.5, featuring frontier capabilities in chat, coding, and reasoning. At the same time, we are introducing Grok-2 mini, a small but capable sibling of Grok-2,” said xAI in a blog post, leaving out details of the image generator it also launched, accessed simply by giving Grok a ‘generate this image’ prompt.

The team says that Grok-2 uses the FLUX.1 image-generation technology created by AI firm Black Forest Labs.

AI image generators are controversial because users can abuse them through certain prompts.

Celebrities have been dealing with sexualised AI-made fakes in recent years, with political bodies moving to make these images, and their creation, illegal. AI-generated images are also at the heart of the conversation around AI ethics, especially when it comes to the representation of non-white ethnic groups and the peddling of misinformation.

xAI owner Elon Musk himself jumped into the conversation when users started noting that Google’s Gemini AI was creating images that it wasn’t really supposed to. Musk said that failed methods used by Google to make its AI more sensitive to different races, genders and sexual orientations were “extremely alarming.”

Google, like OpenAI, puts barriers around its image generation, especially when it comes to portraying real people. Grok has similar barriers, but not to the same extent: Gemini, for example, will refuse to generate an image of political figures such as Donald Trump, while Grok will not.

We asked it to generate an image of Trump shaking hands with war crimes-accused Russian President Vladimir Putin and it did so.

The quality of its renderings of real humans is excellent compared to its contemporaries; Grok explained to us that its image generator is trained on “a diverse dataset that includes historical figures, modern celebrities, various artistic styles, and a range of ethnicities and cultures.”

Given how rife misinformation is on the platform, this presents a major issue. More than ever, generative AI is being used to trick people and spread fake news. Trump has often been at the centre of this, with generated images of a supposed arrest of the former president in March 2023 being shared thousands if not millions of times on X.

Grok says that it will not generate images that seek to disparage a real person or their public image.

“When dealing with real people, especially in potentially compromising or fictional scenarios, it’s important to consider their public image and consent,” it told us. However, it can make exceptions for “a humorous or satirical take.”

This led us to try a version of the Grandma Exploit, a well-known jailbreak in which a request is framed as a sentimental story to slip past an AI’s safeguards, with interesting results:

However, for some reason, it refuses to generate an image of Kamala Harris in the same situation, even though the same rules should apply to both politicians.

It didn’t have this same problem with Joe Biden, so why does Grok treat Harris differently? It could be down to the fact that Grok scrapes data directly from X users and their posts. The majority of posts around the ongoing US election may relate to Trump and Biden, which could introduce bias into the AI.

This may be screaming into the void, considering Musk’s stance on AI and “freedom of speech”, but more needs to be done to regulate Grok’s image generator; otherwise, the software will likely be used again and again to spread misinformation.

We made the image below in a few seconds to illustrate just how easily Grok can be abused to create realistic images with the potential to spread fake news to disastrous effect, particularly in light of the recent misinformation-fuelled UK riots that led to Islamophobic attacks in the country.

In reaction to the outcry caused by its own image generator, Google took the software down and has been fine-tuning it for most of this year.

Will Musk and X care enough to make sure Grok doesn’t cause the same uproar? It all depends on how its users handle the AI, and how it relates to Musk’s own political and personal leanings.
