Testifying before the Georgia Senate Judiciary Committee on January 29, Rep. Hunt Blackwell urged lawmakers to eliminate the bill’s criminal penalties and add an exception for news organizations that want to republish deepfakes as part of their reporting. The Georgia legislative session ended before the bill could be debated.
Federal deepfake law

Federal legislation on deepfakes is also likely to face resistance. In January, lawmakers introduced the Stop AI Fraud Act, which would grant people property rights in their likenesses and voices. Anyone depicted in a deepfake of any kind, along with their heirs, could sue those involved in creating and distributing the counterfeit. The rules are intended to protect people from pornographic deepfakes as well as unauthorized artistic imitations. A few weeks later, the ACLU, the Electronic Frontier Foundation, and the Center for Democracy and Technology filed written opposition.
Along with several other groups, they argued that such laws could be used to suppress far more than just illegal speech: the mere threat of a lawsuit, the letter contends, could deter people from using the technology for constitutionally protected activities such as satire, parody, and the expression of opinion.
“The Stop AI Fraud Act explicitly recognizes First Amendment protections for speech and expression in the public interest,” bill sponsor Rep. Maria Elvira Salazar noted in a statement to WIRED. Rep. Yvette Clarke, who introduced a similar bill that would require deepfakes depicting real people to be labeled, told WIRED that it has been amended to include exceptions for satire and parody.
In interviews with WIRED, ACLU policy advocates and litigators said they aren’t opposed to limited regulation targeting nonconsensual deepfake pornography. But they pointed out that existing anti-harassment laws already provide a robust, if patchwork, framework for addressing the issue. “Of course, there will be issues that existing law can’t regulate,” Jenna Leventoff, senior policy counsel at the ACLU, told WIRED. “But as a general rule, we think existing law is sufficient to address many of these issues.”
But this is far from a consensus among legal scholars. Mary Anne Franks, a law professor at George Washington University and a leading advocate of strict deepfake measures, told WIRED in an email: “The obvious flaw in the ‘we already have laws that address this’ argument is that if it were true, the explosion of this kind of abuse would have been accompanied by a corresponding increase in criminal prosecutions.” Franks said that in harassment cases, prosecutors generally must prove beyond a reasonable doubt that the defendant intended to harm a specific victim, a high bar given that the perpetrator may not even know the victim.
Franks added: “Victims of this abuse consistently tell us that they have no clear legal remedy — and they know it.”
The ACLU has not yet sued the government over any generative AI regulation. A representative for the group declined to say whether a lawsuit is in the works but said that the national office and several state affiliates are closely monitoring legislative developments. “If any issues arise, we tend to act quickly,” Leventoff said.