As AI creation tools become more accessible, so too does the risk of deepfakes and AI-generated misrepresentations, which could pose a significant threat to democracy through misinformation.
Indeed, just this week X owner Elon Musk shared a video of US Vice President Kamala Harris making disparaging remarks about President Joe Biden, which many argued should have been labelled as a deepfake to avoid confusion.
Musk essentially laughed off the suggestion, claiming the video is clearly a parody and that “parody is legal in the US.” Maybe so, but when you share an AI-generated deepfake with hundreds of millions of people, there’s a real risk that at least some of them will be convinced that it’s real.
So, although this example clearly appears fake, it highlights the risks of deepfakes and the need for proper labelling to limit their misuse.
Which is exactly what a group of U.S. senators proposed this week.
Yesterday, Senators Coons, Blackburn, Klobuchar and Tillis introduced the bipartisan “NO FAKES” Act, which would impose clear penalties on platforms that host deepfake content.
According to the announcement:
“The NO FAKES Act would make individuals or companies liable for damages if they create, host, or share digital replicas of people appearing in audiovisual works, images, or audio recordings that those individuals do not actually appear in or have authorized, including digital replicas created by generative artificial intelligence (AI). Online services hosting unauthorized replicas would be required to remove the replicas upon notification from the rights holder.”
Thus, the bill would essentially give individuals the power to request the removal of deepfakes that depict them in situations that never actually occurred, subject to certain exceptions.
As you might expect, parody is among those exceptions:
“There are exceptions for First Amendment protections, such as documentary and biographical works, and for purposes of comment, criticism or parody. The bill also largely preempts state laws dealing with digital reproduction and creates a sensible national standard.”
Ideally, then, this would establish a legal process for removing deepfakes, though AI-generated content could still proliferate, both under the enumerated exceptions and within the legal grey area of proving that such content is in fact fake.
What happens if there is a dispute over the legitimacy of a video? Do platforms have the legal recourse to keep the content up until it is proven fake?
It seems platforms may have grounds to dispute such claims rather than removing content on request, which means that some of the more convincing deepfakes may still get through.
The main focus, of course, is on AI-generated sex tapes and misrepresentations of celebrities. In those cases, the standards for what should be taken down seem relatively clear, but as AI technology advances, establishing what is actually true, and enforcing takedowns accordingly, may become increasingly difficult.
Either way, this is another step towards enabling people to enforce their rights over AI-generated depictions of themselves, and while there are some grey areas, it should at least result in tougher legal penalties for those who create and host such content.
You can read the full proposed bill here.