First, the bad news: AI-generated images are incredibly hard to spot. Once-clear telltale signs like distorted hands and scrambled handwriting are becoming increasingly rare as AI models advance at a dizzying pace.
It's no longer obvious which images were created with popular tools like Midjourney, Stable Diffusion, DALL-E, or Gemini. AI-generated images are becoming more deceptive, creating a major misinformation problem. Fortunately, identifying AI-generated images usually isn't impossible, but it takes more work than it used to.
AI Image Detectors – Proceed with caution
These tools use computer vision to examine pixel patterns and estimate the likelihood that an image was generated by AI. That means AI detectors aren't perfect, but they're a good way for the average person to gauge whether an image merits scrutiny, especially when it's not immediately obvious.
“Unfortunately, for the human eye, studies have shown there's about a 50/50 chance that a person gets it right,” said Anatoly Kvitnitsky, CEO of the AI image detection platform AI or Not. “But with AI detection for images, because of pixel-level patterns, those signals are still there, even as the models continue to improve.” Kvitnitsky claims AI or Not achieves an average accuracy rate of 98 percent.
Other AI detectors with generally high success rates include Hive Moderation, SDXL Detector, and Illuminarty. We tested each of these detectors on 10 AI-generated images to see how effective they are.
AI or Not
Unlike other AI image detectors, AI or Not simply returns a “yes” or “no” rather than a probability score. The free plan allows 10 uploads per month. When we tested it on 10 images, it had an 80 percent success rate.
AI or Not correctly identified this image as AI-generated.
Credit: Screenshot: Mashable / AI or Not
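If you'd rather script these checks than drag images into a web page one at a time, detection services like this are typically reachable over HTTP. Here's a minimal Python sketch of that pattern; the endpoint, credential, and response shape below are illustrative placeholders, not AI or Not's documented API, so check the vendor's developer docs for the real values.

```python
# Minimal sketch: upload an image to an AI-detection HTTP API.
# The URL and response fields are placeholders (NOT AI or Not's real API);
# substitute the endpoint and auth scheme from the vendor's documentation.
import requests

API_KEY = "your-api-key"  # hypothetical credential
ENDPOINT = "https://api.example-detector.com/v1/image"  # placeholder URL

with open("photo_to_check.jpg", "rb") as f:
    response = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"image": f},
        timeout=30,
    )
response.raise_for_status()
print(response.json())  # e.g. {"verdict": "ai"} -- exact shape varies by vendor
```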
Hive Moderation
We tested Hive Moderation's free demo tool on more than a dozen images and got an overall success rate of 90 percent, meaning it correctly judged those images as likely AI-generated. However, it failed to spot any AI qualities in an artificial image of a horde of chipmunks climbing a rock face.
As much as we'd like to believe the chipmunk army is real, the detector got this one wrong.
Credit: Screenshot: Mashable / Hive Moderation
SDXL Detector
Hugging Face's SDXL Detector takes a few seconds to load and may throw errors on the first try, but it's completely free and returns a probability percentage rather than a simple verdict. In our test, it rated 70 percent of the AI-generated images as highly likely to be generative AI.
The SDXL Detector correctly identified a Grok-2-generated image of Barack Obama in a public restroom.
Credit: Screenshot: Mashable / SDXL Detector
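Because the SDXL Detector lives on Hugging Face, you can also skip the web demo and run a detector model locally. The sketch below uses the transformers image-classification pipeline; note that the model ID "Organika/sdxl-detector" is our assumption about the checkpoint behind the demo, so verify it on the model's Hugging Face page first.

```python
# Minimal sketch: run a Hugging Face image-classification model locally.
# Assumes the SDXL Detector demo is backed by the "Organika/sdxl-detector"
# checkpoint -- verify the model ID on Hugging Face before relying on it.
# Requires: pip install transformers torch pillow
from transformers import pipeline

detector = pipeline("image-classification", model="Organika/sdxl-detector")

results = detector("photo_to_check.jpg")  # local path or image URL
for result in results:
    # Each result is a dict with a "label" (e.g. human vs. artificial)
    # and a confidence "score" between 0 and 1.
    print(f"{result['label']}: {result['score']:.1%}")
```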
Illuminarty
Illuminarty has a free plan that offers basic AI image detection. Of the 10 AI-generated images we uploaded, it correctly flagged only 50 percent as likely AI-generated. To the horror of rodent biologists, it rated the infamous rat penis image as having a low probability of being AI-generated.
Hmm, this one should have been a layup.
Credit: Screenshot: Mashable / Illuminarty
As you can see, AI detectors work well most of the time, but they're not foolproof and shouldn't be used as the only way to authenticate an image. They may catch a deceptively realistic AI image yet miss one that's obviously AI-created. This is exactly why it's best to combine multiple methods.
More Tips and Tricks
Old-fashioned reverse image search
Another way to detect AI-generated images is a simple reverse image search, which is what Bamshad Mobasher, a computer science professor and director of the Center for Web Intelligence at DePaul University's College of Computing and Digital Media in Chicago, recommends. Uploading an image to Google Image Search or a dedicated reverse image search tool can help you trace its origins. If the photo ostensibly shows a real news event, “you may be able to determine that it's fake or that the actual event didn't happen,” Mobasher says.
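If you want to fold this step into a script, you can hand a hosted image's URL straight to Google Lens. The sketch below leans on the lens.google.com "uploadbyurl" endpoint, which Google doesn't formally document and could change at any time, so treat it as a convenience rather than a stable API.

```python
# Minimal sketch: open a reverse image search for a hosted image.
# The "uploadbyurl" endpoint is an undocumented Google Lens convenience
# and may change; for local files, use the upload button on images.google.com.
import webbrowser
from urllib.parse import quote

image_url = "https://example.com/suspicious-photo.jpg"  # hypothetical image
search_url = "https://lens.google.com/uploadbyurl?url=" + quote(image_url, safe="")

webbrowser.open(search_url)  # opens the reverse image search in your browser
```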
Google’s “About this image” tool
Google Search also has an “About this image” feature that provides contextual information, like when an image was first indexed and where else it has appeared online. You can find it by clicking the three-dot icon in the top right corner of an image result.
Signs visible to the naked eye
AI-generated images are getting scarily good, but it's still worth looking for telltale signs. As mentioned above, you might still see the occasional image with distorted hands, hair that's a little too perfect, or in-image text that's garbled or nonsensical. An analysis from our sister site PCMag recommends looking for subjects with blurry or distorted objects in the background, or perfect (poreless, flawless) skin.
At first glance, the Midjourney image below looks like it could be a Kardashian relative promoting a cookbook, the kind of shot easily pulled from Instagram. But look closer and you'll see warped sugar jars, bent knuckles, and skin that's a little too smooth.
If you look again, you’ll see that not everything is as it appears in this image.
Credit: Mashable / Midjourney
“AI is good at generating overall scenes, but the devil is in the details,” Sasha Luccioni, AI and climate lead at Hugging Face, told Mashable in an email, advising us to look for “mostly small inconsistencies like extra fingers, asymmetrical jewelry or facial features, or mismatched objects (like an extra handle on a teapot).”
Mobasher, who is also a fellow at the Institute of Electrical and Electronics Engineers (IEEE), said to zoom in and look for “odd little details” like a stray pixel or a slightly mismatched earring.
“Parts of the same image at the same focal point can be blurry while others are incredibly detailed,” Mobasher said. This is especially true of backgrounds. “If there's a sign with text in the background, it often looks garbled, or might not even resemble real language,” he added.
This image of Volkswagen vans parading on the beach was created by Google’s Imagen 3. The sand and buses look perfectly photorealistic, but if you look closely you’ll notice that the third bus’s VW logo is replaced with garbled symbols, and the fourth bus has amorphous specks.
I’m sure there’s been a VW bus parade at some point, but not this one.
Credit: Mashable / Google
Note the garbled logo and strange specks.
Credit: Mashable / Google
It all depends on AI literacy
None of the above methods will be of much use if you don't first pause, while consuming media, especially social media, to consider whether what you're looking at might be AI-generated in the first place. Much like media literacy, a concept that gained currency around the misinformation-ridden 2016 election, AI literacy is the first line of defense in determining what's true.
AI researchers Duri Long and Brian Magerko define AI literacy as “a set of competencies that enables individuals to critically evaluate AI technologies; communicate and collaborate effectively with AI; and use AI as a tool online, at home, and in the workplace.”
It’s important to know how generative AI works and what to look out for. “It may sound cliché, but taking the time to check the origins and sources of the content you see on social media is a good start,” says Luccioni.
First, ask yourself where the image in question came from and in what context it appeared. Who published it? What does the accompanying text (if any) say about it? Have other people or outlets published it too? How does the image or the accompanying text make you feel? If it seems designed to provoke or entice you, think about why.
Organizations fighting the problem of AI deepfakes and misinformation
As we've seen, the methods individuals have for distinguishing AI images from real ones are currently patchy and limited. Worse yet, the spread of deceptive AI-generated images is a double whammy: the posts themselves spread falsehoods, and their existence breeds distrust in all online media. However, the rise of generative AI has also spawned several efforts to increase trust and transparency.
The Coalition for Content Provenance and Authenticity (C2PA) was founded by Adobe and Microsoft, and its members include technology companies such as OpenAI and Google as well as media companies such as Reuters and the BBC. C2PA promotes clickable Content Credentials that identify the origin of an image and whether it's AI-generated, but it's up to the creator to attach those credentials to an image in the first place.
Meanwhile, the Starling Lab at Stanford University is hard at work authenticating real images. The lab verifies “sensitive digital records, such as documentation of human rights violations, war crimes, and testimony of genocide,” and stores verified digital images in decentralized networks so they can't be tampered with. While the lab's work isn't user-facing, its library of projects is a good resource for anyone looking to authenticate images of, say, the war in Ukraine or the presidential transition from Donald Trump to Joe Biden.
Experts often talk about AI images in the context of hoaxes and misinformation, but AI imagery isn't always meant to deceive. AI images are sometimes just jokes or memes removed from their original context, or they're lazy advertising. Or maybe they're simply a form of creative expression using an intriguing new technology. But for better or worse, AI images are now part of everyday life. And it's up to you to tell the difference.
I’m paraphrasing Smokey Bear here, but I’m sure he’d understand.
Credit: Mashable / xAI