Did you notice that the image above was created by artificial intelligence? It can be difficult to spot AI-generated images, videos, audio, and text as technological advances make them increasingly indistinguishable from human-created content, and ripe for exploitation in disinformation campaigns. However, knowing the current state of the AI technology used to create disinformation, and the various signs that what you’re seeing may be fake, can help you avoid being fooled.
World leaders are concerned: a World Economic Forum report warned that misinformation and disinformation have the potential to “fundamentally disrupt electoral processes in multiple economies over the next two years,” while easier access to AI tools has “already led to an explosion in counterfeit information and so-called ‘synthetic’ content, from advanced voice clones to fake websites.”
While the terms misinformation and disinformation both refer to false or inaccurate information, disinformation is information that is deliberately intended to deceive or mislead.
“The problem with AI-enabled disinformation is the scale, speed, and ease with which campaigns can be launched,” said Hany Farid of the University of California, Berkeley. “These attacks no longer require state-sponsored actors or well-funded organizations; any individual with modest computing power can create large amounts of fake content.”
He says that generative AI (see glossary below) is “polluting the entire information ecosystem, calling into question everything we read, see and hear,” and that his research shows that in many cases, AI-generated images and sounds are “almost indistinguishable from reality.”
However, Farid and his colleagues’ research reveals that there are strategies people can follow to reduce the risk of falling for social media misinformation and AI-created disinformation.
How to spot fake AI images
Have you seen the photo of Pope Francis wearing a down jacket? Fake AI images like these are becoming more common as new tools based on diffusion models (see glossary below) allow anyone to generate images en masse from simple text prompts. One study by Nicolas Dufour and colleagues at Google found that the share of AI-generated images in fact-checked misinformation claims has risen sharply since the beginning of 2023.
“These days, media literacy requires AI literacy,” says Negar Kamali of Northwestern University in Illinois. In a 2024 study, she and colleagues identified five different categories of errors in AI-generated images (outlined below) and offered guidance on how people can spot these on their own. The good news is that their research suggests that people are currently about 70% accurate at detecting fake AI images. You can use their online image test to assess your own detective skills.
5 common types of errors in AI-generated images:
- Socio-cultural impossibilities: Does the scene depict behavior that is unlikely, unusual, or surprising for a particular culture or historical figure?
- Anatomical irregularities: Look closely. Do the hands or other body parts look unusual in shape or size? Do the eyes or mouth look strange? Are any body parts fused together?
- Stylistic artifacts: Do the images look unnatural, too perfect, or too stylized? Does the background look odd, or is something missing? Is the lighting strange or inconsistent? (One crude automated check is sketched after this list.)
- Functional impossibilities: Are there any objects that look odd, unreal, or non-functional? For example, a button or belt buckle in an odd place?
- Violations of physics: Do the shadows point in different directions? Does the mirror’s reflection match the world depicted in the image?
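Researchers have also observed that generated images often carry subtle periodic patterns in their frequency spectrum that are hard to see in the pixels themselves. The minimal sketch below, assuming the numpy and Pillow libraries and hypothetical file names, simply renders an image’s Fourier spectrum for a human to eyeball; it illustrates the idea of looking for stylistic artifacts and is not a reliable detector.

```python
# Render an image's 2D Fourier spectrum for visual inspection.
# Assumes numpy and Pillow; the file names are hypothetical.
import numpy as np
from PIL import Image

def magnitude_spectrum(path: str) -> np.ndarray:
    """Log-magnitude Fourier spectrum of an image, low frequencies centered."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    return np.log1p(np.abs(spectrum))  # compress the huge dynamic range

spec = magnitude_spectrum("suspect_image.png")
# Rescale to 0-255 and save. Regular, grid-like bright peaks away from the
# center can indicate the upsampling artifacts common in generated images.
scaled = (spec - spec.min()) / (spec.max() - spec.min() + 1e-9)
Image.fromarray((255 * scaled).astype(np.uint8)).save("suspect_spectrum.png")
```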
How to spot deepfakes in videos
Since 2014, an AI technique known as generative adversarial networks (see glossary below) has allowed tech-savvy individuals to create video deepfakes: digital manipulations of footage of real people that swap in different faces, create new facial expressions, and insert new audio with matching lip sync. This has enabled a growing number of fraudsters, state-sponsored hackers, and ordinary internet users to produce video deepfakes, with celebrities such as Taylor Swift and private individuals alike unwillingly featured in deepfake pornography, scams, and political misinformation and disinformation.
Many of the same tips for spotting fake AI images (see above) also apply to suspicious videos, and researchers from the Massachusetts Institute of Technology and Northwestern University in Illinois have put together additional tips for spotting these deepfakes, though they acknowledge that no single technique is foolproof.
6 tips to spot AI-generated videos:
- Mouth and lip movements: Are there moments when the video and audio are not perfectly in sync?
- Anatomical defects: Does the face or body look strange or move unnaturally?
- Face: Look for inconsistencies in facial smoothness, in the wrinkles around the forehead and cheeks, and in facial moles.
- Lighting: Is the lighting inconsistent? Do shadows behave the way you expect them to? Pay particular attention to the person’s eyes, eyebrows, and glasses.
- Hair: Does the hair or facial hair look or move oddly?
- Blinking: Blinking too much or too little can be a sign of a deepfake (one way to automate this check is sketched below).
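As a rough illustration of the blinking tip, the sketch below counts blinks in a video by tracking the eye aspect ratio (EAR) with MediaPipe’s face mesh. The file name, the EAR threshold of 0.2, and the benchmark of roughly 15 to 20 blinks per minute for humans are illustrative assumptions; an unusual count is a weak signal worth investigating, not proof of a deepfake.

```python
# Count blinks in a video via the eye aspect ratio (EAR).
# Assumes opencv-python, mediapipe, and numpy; the file name is hypothetical.
import cv2
import mediapipe as mp
import numpy as np

LEFT_EYE = [33, 160, 158, 133, 153, 144]  # commonly used EAR landmark indices

def eye_aspect_ratio(p: np.ndarray) -> float:
    """EAR: vertical eyelid distances divided by horizontal eye width."""
    v1 = np.linalg.norm(p[1] - p[5])
    v2 = np.linalg.norm(p[2] - p[4])
    h = np.linalg.norm(p[0] - p[3])
    return (v1 + v2) / (2.0 * h)

cap = cv2.VideoCapture("suspect_clip.mp4")  # hypothetical file
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
blinks, eye_closed, frames = 0, False, 0
with mp.solutions.face_mesh.FaceMesh(max_num_faces=1) as mesh:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames += 1
        result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if not result.multi_face_landmarks:
            continue
        lm = result.multi_face_landmarks[0].landmark
        h, w = frame.shape[:2]
        pts = np.array([[lm[i].x * w, lm[i].y * h] for i in LEFT_EYE])
        if eye_aspect_ratio(pts) < 0.2:       # assumed "eye closed" threshold
            eye_closed = True
        elif eye_closed:                       # eye reopened: one blink
            eye_closed = False
            blinks += 1
cap.release()
minutes = frames / fps / 60.0
print(f"{blinks} blinks in {minutes:.1f} min "
      f"(humans typically blink around 15-20 times per minute)")
```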
A newer category of video deepfakes is based on diffusion models (see glossary below), the same AI technology behind many image generators, which can create fully AI-generated video clips from text prompts. Companies are already testing and releasing commercial versions of AI video generators, making it easier for anyone to create video deepfakes without special technical knowledge. So far, the resulting videos tend to feature distorted faces and strange body movements.
“AI-generated videos are likely easier for humans to detect than images because they contain more motion and are much more likely to have AI-generated artifacts and impossibilities,” Kamali says.
How to spot an AI bot
Social media accounts controlled by computer bots are becoming commonplace across many social media and messaging platforms. Since 2022, a growing number of these bots have leveraged generative AI technologies such as large language models (see glossary below), which make it easy and cheap to mass-produce grammatically correct, persuasive, AI-written content tailored to different audiences and contexts across thousands of bots.
“It’s now much easier to customize these large language models for specific audiences with specific messages,” says Paul Brenner of the University of Notre Dame in Indiana.
In their study, Brenner and his colleagues found that even though subjects were told they might be interacting with a bot, they were only able to distinguish between AI-powered bots and humans about 42 percent of the time. You can test your own bot-detection skills here.
Brenner said some strategies could help identify less sophisticated AI bots.
5 ways to tell if a social media account is an AI bot:
- Emojis and hashtags: Overuse of these can be a sign.
- Unusual phrases, word choices, and analogies: Unusual language can indicate an AI bot.
- Repetition and structure: Bots may repeat certain phrases, follow a similar rigid format across posts, or overuse certain slang terms. (A rough automated check is sketched after this list.)
- Ask questions: Direct questions can reveal a bot’s lack of knowledge about a topic, especially when it comes to local places and situations.
- Assume the worst: If the social media account is not a personal contact and its identity has not been clearly verified or confirmed, it may be an AI bot.
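As a minimal sketch of the repetition-and-structure tip, the standard-library Python below measures vocabulary diversity and repeated three-word phrases across a sample of an account’s posts. The example posts are hypothetical, and any cut-off you apply to these numbers would be an assumption; templated output is only one weak signal among many.

```python
# Crude repetition metrics over a sample of posts from one account.
from collections import Counter

def repetition_signals(posts: list[str]) -> dict:
    """Return vocabulary diversity and the share of repeated 3-word phrases."""
    tokens = [w.lower() for post in posts for w in post.split()]
    type_token_ratio = len(set(tokens)) / max(len(tokens), 1)
    trigrams = Counter()
    for post in posts:
        words = post.lower().split()
        trigrams.update(zip(words, words[1:], words[2:]))
    repeated = sum(count for count in trigrams.values() if count > 1)
    return {
        # Low ratio -> small, repetitive vocabulary.
        "type_token_ratio": type_token_ratio,
        # High share -> templated, possibly mass-produced posts.
        "repeated_trigram_share": repeated / max(sum(trigrams.values()), 1),
    }

# Hypothetical usage with two near-identical, hashtag-heavy posts.
print(repetition_signals([
    "Huge news! Check this out now! #crypto #win",
    "Huge news! Don't miss this now! #crypto #win",
]))
```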
How to detect voice cloning and audio deepfakes
AI voice-cloning tools (see glossary below) have made it easy to generate new audio that can imitate virtually anyone. This has led to a rise in audio deepfake scams that clone the voices of family members, company executives, and political leaders such as US President Joe Biden. These are much harder to identify than AI-generated videos and images.
“Voice clones are particularly difficult to distinguish from the real thing because there are no visual cues to help the brain make that decision,” said Rachel Tobac, co-founder of the white-hat hacking organization SocialProof Security.
These AI voice deepfakes can be difficult to detect, especially when used in video or phone calls, but there are some common sense steps you can take to help distinguish between real human voices and AI-generated ones.
4 steps to recognize whether audio has been cloned or faked:
- Public figures: If the audio clip is of an elected official or public figure, review whether what they say is consistent with what has already been publicly reported or shared about that person’s views or actions.
- Look for inconsistencies: Compare the audio clip to previously authenticated video or audio clips featuring the same person. Are there any inconsistencies in the tone or delivery of the voice? (A rough automated comparison is sketched after this list.)
- Awkward silences: If you’re listening to a phone call or voicemail and notice that the speaker takes unusually long pauses while speaking, this could be due to the use of AI-powered voice-cloning technology.
- Robotic and wordy: Robotic or unusually verbose speech may indicate that someone is combining voice cloning, to mimic a person’s voice, with a large language model, to generate the exact phrasing.
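To illustrate the comparison tip above, the sketch below summarizes each clip with mean MFCC features (via the librosa library) and compares them with cosine similarity. The file names are hypothetical, and under these assumptions this is only a crude voice-similarity heuristic, not a deepfake detector.

```python
# Compare a suspect clip to an authenticated clip of the same speaker.
# Assumes librosa and numpy; the file names are hypothetical.
import numpy as np
import librosa

def voice_fingerprint(path: str) -> np.ndarray:
    """Mean MFCC vector as a rough summary of a speaker's timbre."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

known = voice_fingerprint("verified_interview.wav")    # authenticated clip
suspect = voice_fingerprint("suspicious_voicemail.wav")
print(f"similarity: {cosine_similarity(known, suspect):.3f}")
# Markedly lower similarity than between two authenticated clips of the
# same person is a reason for extra scrutiny, nothing more.
```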
Technology is evolving
Currently, there are no hard-and-fast rules that can reliably distinguish AI-generated content from authentic human content. AI models that generate text, images, videos, and audio will surely continue to improve, quickly producing content that looks authentic, without obvious artifacts or mistakes. “Recognize that AI manipulation and fabrication of images, videos, and audio happens fast, in under 30 seconds,” Tobac says. “This makes it easy for bad actors looking to mislead people to quickly push out AI-generated disinformation, which can appear on social media within minutes of breaking news.”
While it’s important to hone our ability to spot AI-generated disinformation and to ask more questions about what we read, see, and hear, this alone won’t be enough to stop the damage, and the responsibility for spotting fakes can’t be placed solely on individuals. Farid is among a number of researchers who argue that government regulators should hold accountable the big tech companies, along with the startups backed by prominent Silicon Valley investors, that have developed many of the tools flooding the internet with fake, AI-generated content. “Technology is not neutral,” Farid says. “The tech industry is selling itself the idea that it doesn’t have to take on the responsibilities other industries take on, and I totally reject that.”
Glossary
Diffusion model: An AI model that learns by first adding random noise to data (such as blurring an image) and then reversing the process to recover the original data.
Generative adversarial networks: A machine-learning technique that pits two neural networks against each other: one generates synthetic data, while the other tries to predict whether the data it sees is genuine or generated.
Generative AI: A broad class of AI models that can generate text, images, audio, and video after being trained on similar forms of content.
Large language models: A subset of generative AI models that can produce different forms of written content in response to text prompts and, in some cases, translate between different languages.
Voice clone: The use of AI models to create a digital copy of a person’s voice and to generate new speech samples in that voice.