The FCC has issued a declaratory ruling applying the protections of the Telephone Consumer Protection Act (TCPA) to outlaw robocalls that use AI-generated voices. The Commission’s unanimous decision was spurred by public fallout from a doctored audio message purporting to be President Biden urging New Hampshire voters not to vote in the state’s Democratic primary last month. The announcement makes clear that the potential for malicious actors to use AI to deceive voters and subvert democratic processes is top of mind for the government this election year. This is not the first time the TCPA has been used to protect the public from election interference. But rather than pursue individual actors for individual instances of interference, as it has in the past, the Commission has created a much broader blanket ban on AI-generated voices in robocalls, one that covers election-related AI-generated calls among others.

To say that 2024 is set to be a monumental year for elections and democracy worldwide is an understatement. Roughly half the world’s population, spanning more than 70 countries, will head to the polls this year. As varied as these elections and political contexts are across the globe, one technological conundrum poses a unified threat to how elections and their outcomes will be perceived, and to their exposure to manipulation. The diffusion of tools capable of generating realistic image, video, and audio content, known as deepfakes, leaves the online and media environments susceptible to being drowned in misinformation, and the rapid development and spread of this technology leaves most end users ill-equipped to distinguish AI-generated content from the real thing. Pair the proliferation of deepfakes with the public’s unpreparedness to confront them, and the result is a very real potential to sway any of the many elections set to take place this year.

True, disinformation is not a new phenomenon; it has been around as long as politics. In the wake of the 2016 U.S. presidential election, it became a hot topic as spectators grasped how 21st-century technology had allowed disinformation to spread more widely and rapidly than ever before. Now, even newer technological innovations are reigniting the disinformation panic, as powerful tools for creating convincing, ultra-realistic false content, which we are primed to readily believe, are added to the disinformation toolkit. The adage “seeing is believing” has so far enjoyed the status of a universal truism, only adding to the threat posed by deepfake technology.

The FCC’s decision targeting AI-generated robocalls is a start at addressing the threat, but just as we’ve grown accustomed to in the privacy law context, states are moving much more quickly than the federal government to introduce legislation regulating the use of deepfakes in elections. Most proposed state laws would require that candidates place disclaimers on any AI-generated media, while a few others take a stronger stance, banning outright deepfakes intended to influence elections, or certain categories of such deepfakes, within certain pre-election time windows.

Of course, politics is not the only realm in which deepfakes have drawn public concern lately. Celebrities and even non-famous individuals have been the subjects of deepfakes that use their faces and voices to create music and videos, in some cases including pornographic or other highly embarrassing material. States have also taken aim at these insidious uses of AI. California, Georgia, Hawaii, Illinois, Minnesota, New York, Texas, and Virginia have all passed laws that either criminalize certain uses of nonconsensual deepfakes or provide a private right of action for victims whose likenesses are used. Other states are considering similar measures, including the “Taylor Swift Act” in Missouri. And in January, legislation was proposed in Congress that would give individuals a property right in their own voice and likeness, with an accompanying private right of action for victims whose likeness is used misleadingly in a deepfake for any purpose.

On top of the FCC’s decision and state-led legislative pushes, we’ve seen other technical proposals, such as the watermarking efforts discussed in President Biden’s recent Executive Order on AI. While these measures are all steps in the right direction, more may be necessary to protect individuals from having their identities misused or from being duped by deceptive deepfakes during elections. We will continue to watch for new legislation and regulation in this area. Subscribe to our blog for updates.