A few weeks ago, a Google search for “deepfake nude Jennifer Aniston” brought up at least seven top results that purported to show AI-generated explicit images of the actress. Now they have disappeared.
New tweaks to how search results are ranked, introduced this year, have already cut exposure to fake explicit images by more than 70 percent on searches seeking that content about a specific person, according to Emma Higham, a product manager at Google. Where problematic results once may have appeared, Google’s algorithm now promotes news articles and other non-explicit content. A search for Aniston now returns links to articles such as “Taylor Swift Deepfake AI Porn Threat” and a warning from the Ohio Attorney General about “deepfake celebrity endorsement scams” targeting consumers.
“These changes will allow people to read about the impact deepfakes are having on society, rather than viewing pages full of fake, non-consensual images,” Higham wrote in a company blog post on Wednesday.
The ranking changes come in the wake of a WIRED investigation this month that revealed Google executives had, in recent years, rejected numerous ideas proposed by staff and outside experts to combat the growing problem of intimate depictions of people being spread online without their permission.
Google has made it easier to request the removal of unwanted explicit content, but victims and advocates have called for more aggressive measures. The company, however, has tried to avoid overly censoring the internet or hindering access to legal pornography. A Google spokesperson said at the time that multiple teams were working hard to strengthen safeguards against so-called non-consensual explicit imagery (NCEI).
Victim advocacy groups say NCEI is on the rise as AI image-generating tools, some with few restrictions on their use, become widely available, making it easy to create fake, explicit images of anyone, from middle school classmates to A-list celebrities.
A WIRED analysis in March found that Google had received more than 13,000 requests to remove links to the 12 most popular websites hosting explicit deepfakes, and Google removed the results in about 82 percent of cases.
As part of Google’s new crackdown, Higham said, the company will begin applying three of the measures it uses to make real but unwanted explicit images harder to find to synthetic, unwanted images as well. After Google honors a request to remove a sexually explicit deepfake, it will try to keep duplicates of that image out of search results. It will also filter explicit images out of results for queries similar to those cited in the removal request. Finally, websites that are subject to a “high volume” of successful removal requests will be demoted in search results.
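Google has not said how its duplicate detection works. One common technique for catching near-duplicates of a known image is perceptual hashing, sketched below in Python using the third-party imagehash library. This is a minimal illustration of that general approach, not Google’s implementation; the function names, the hash store, and the distance threshold are all hypothetical example values.

```python
# Minimal sketch of near-duplicate image suppression via perceptual hashing.
# Illustrates the general technique only -- NOT Google's actual system.
from PIL import Image
import imagehash

# Hypothetical store of hashes for images already removed after takedowns.
removed_hashes: set[imagehash.ImageHash] = set()

def register_removed(path: str) -> None:
    """Record the perceptual hash of an image removed after a takedown."""
    removed_hashes.add(imagehash.phash(Image.open(path)))

def is_near_duplicate(path: str, max_distance: int = 8) -> bool:
    """Return True if the image closely matches any removed image.

    Perceptual hashes of visually similar images differ in only a few
    bits, so a small Hamming distance (here an assumed threshold of 8)
    flags a likely duplicate even after resizing or recompression.
    """
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= max_distance for known in removed_hashes)
```

Unlike cryptographic hashes, perceptual hashes change only slightly when an image is cropped, scaled, or re-encoded, which is why this family of techniques is widely used to catch re-uploads of previously removed content.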
“These efforts are intended to provide additional reassurance to people who are particularly concerned about seeing similar content about them in the future,” Higham wrote.
Google acknowledges that its measures aren’t perfect, and former employees and victim advocates say it should go further. In the United States, the search engine prominently warns people looking for nude images of children that such content is illegal. The warning’s effectiveness is unclear, but advocates support it as a potential deterrent. Searches for adult sexual deepfakes, however, trigger no similar warning, despite laws banning the sharing of NCEI; a Google spokesperson confirmed that this will remain the case.