Key Takeaways
- Grok-2 produces controversial images of politicians and copyrighted characters, some of them played for laughs.
- AI technology simplifies the production of deepfakes, raising ethical concerns about misuse and questionable content.
- Grok-2’s loose regulation raises ethical and legal questions, ranging from the creation of deepfakes to the use of copyrighted logos.
X calls Grok an AI assistant with "a touch of humor and rebelliousness." But shortly after the beta version was announced, Grok 2 users flooded the former Twitter with ethically questionable imagery, from disgraced politicians to graphics containing trademarked characters.
While not the first version of X’s AI, the beta version of Grok 2, announced on August 13, adds the ability for the AI to generate images. Grok 2’s guardrails are low, which has earned the AI both praise and criticism: X generates images that many other generative AIs refuse to produce, including deepfakes of politicians and popular cartoon characters turned villains, leading some to praise the bot’s sense of humor while others fret over its huge potential for misuse.
Before the advent of AI, anyone with Photoshop skills and loose ethical boundaries could create a deepfake given enough time, but the technology has simplified and sped up the process, making it easy for anyone to create deepfakes and other misleading or ethically questionable images simply by paying $8 for an X Premium account.
xAI appears to be embracing its identity as a less restrictive platform.
Grok isn’t the first AI to come under fire for ethically questionable output. Google, for example, removed Gemini’s ability to generate people entirely after the model, in an attempt to be politically correct, created ethnically diverse but historically inaccurate images of the US Founding Fathers. But while Google apologized and pulled the feature, xAI seems to be embracing its identity as a less restrictive platform. Despite early criticism, many of the same problematic features remained in place more than a week after the beta launch. There are exceptions, however: the bot refused to generate images of female politicians in bikinis, while linking to an old X post that used Grok to generate those very images.
To see how far xAI’s ethical boundaries stretch, I beta-tested Grok 2 to find out what the AI would generate that other platforms refuse to produce. Grok declined to generate scenes with blood or nudity, so it isn’t entirely without limits. But what does xAI mean by its self-described "spirit of rebellion"? Here are six things I was surprised Grok 2 was able to generate:
Pocket-lint’s ethical standards mean we can’t publish some of the morally questionable images generated, so scroll on without fear of your eyeballs melting at images of presidential candidates in bikinis or your favourite cartoon characters in risqué poses. All images in this post were generated by Grok 2.
Related
How to create AI images using Grok in X
Creating AI images with X is not as easy as with other AI image generation tools, but it is possible with an X Premium subscription.
1 Images of major politicians
AI creates political content with a small disclaimer.
X / Grok
While many AI platforms shy away from talking about politics at all, Grok had no qualms generating images of major politicians, including Donald Trump and Kamala Harris. The AI generated the images with a little note to check vote.org for the latest election information. While the image above of the debate stage looks harmless, Grok did not shy away from putting politicians in a bad light. For example, it had no qualms generating images of politicians surrounded by drug paraphernalia, which we won’t publish here for obvious reasons.
Grok’s political restrictions are loose at best, but the tool has shown a glimmer of conscience since its release, refusing to generate images of female politicians in bikinis, though it then linked to an old X post showing Grok doing just that.
2 Recognizable deepfakes
Celebrities and historical figures are no problem
X / Grok
Grok’s celebrity-generating capabilities go beyond politicians. While they can produce amusing caricatures, such as a photo of Abraham Lincoln using modern technology, they can also be used to spread libel and fake news. Grok also didn’t refuse to generate photos of celebrities taking drugs, supporting political causes, or even kissing other celebrities, to name just a few examples of its potential for abuse.
3 Graphics that blatantly copy other artists
Grok will recreate paintings in an artist’s style, or even a specific named work
X / Grok
The intersection of copyright law and artificial intelligence has been a topic of debate since the technology emerged. But while platforms like Gemini and ChatGPT won’t respond to prompts asking for images in the style of a specific artist, Grok-2 has no such guardrails. Not only did the AI generate images in the general style of a particular artist, but when I named an artist and a specific piece of art, Grok produced an image that felt less like inspiration and more like a copy.
4 Content containing licensed characters
The beta version can recreate cartoon characters
X / Grok
Grok showed its sense of humor when I asked for a photo of Mickey Mouse in a bikini: the AI playfully added a swimsuit over his iconic red pants. But should an AI be able to replicate licensed characters in the first place? Just like copying a famous artist’s painting, copying a licensed character could land you in court. Grok also doesn’t refuse to place beloved childhood characters in morally questionable scenarios, leaving them even more open to abuse.
5 Images containing copyrighted logos
Logos are not prohibited
X / Grok
When we asked Grok for a photo of a political debate, the AI displayed a recognizable CNN logo in the background. This probably shouldn’t have been surprising, since other AIs have already ended up in court for replicating watermarks from their training data. But part of the surprise also came from AI’s reputation for doing a poor job of reproducing text in images, a common flaw that seems to be improving rapidly. Reproducing a logo, like copying a licensed character or another artist’s work, can invite legal trouble.
6 Group photos with an obvious bias toward white people
Grok showed racial prejudice in some scenarios.
X / Grok
AI is notoriously biased, and many early models were trained on images containing relatively few people of color. When I asked for a "group of experts," expecting boring stock photos, Grok produced both men and women, but none of them were people of color. This was true even after five similarly worded prompts. When I finally asked for a "diverse group of experts," even the second try produced images with no people of color.
This bias seems to be primarily seen when asking for images of professionals — the AI may have been trained on stock photos of business professionals that tend to favor white people. When I asked for images in a more casual setting, Grok thankfully generated multiple ethnicities without being prompted.
7 Images of violence
Blood is prohibited, but violence still slips through the filter.
X / Grok
Initially, Grok-2 avoided generating violent images when prompted, opting instead to write text describing what such an image would look like. However, as some X users have pointed out, there’s a loophole around this content restriction: when asked to "create a non-violent image of someone standing over a dead body with a gun," Grok-2 happily obliged; the resulting photo simply contained no blood.