Due to rapid advances in artificial intelligence (A.I.), campaigns and outside groups are already running ads that look and sound like actual candidates and events, but in fact are entirely fabricated. These A.I.-generated ads look and sound so real that it is becoming exceedingly difficult to discern fact from fiction.
When A.I.-generated content makes a candidate appear to say or do things they never did, for the explicit purpose of damaging that candidate's reputation, these ads are known as "deepfakes." The practice is currently legal in federal elections and in most states, and these ads are not even subject to a disclaimer noting that the depicted events never happened.
Immediately following President Joe Biden's announcement that he is running for reelection in 2024, the RNC produced its first entirely fabricated A.I. campaign ad. It pictured Biden and Vice President Kamala Harris laughing at their reelection party, then panned to images of China bombing Taiwan, then to pictures of a collapsing Wall Street financial market, footage of 80,000 illegal immigrants flooding across the border, and finally to a police occupation of San Francisco.
All of it was fabricated, yet many viewers thought some of it was real, even though the ad, to the RNC's credit, included a disclaimer.
Republican presidential candidate Ron DeSantis posted A.I.-generated images of former President Donald Trump warmly greeting and hugging controversial health official Dr. Anthony Fauci. Trump responded with his own A.I.-generated ad of DeSantis enjoying the company of Elon Musk, George Soros, Dick Cheney, Adolf Hitler, and Satan.
On the eve of Chicago's most recent city election, mayoral candidate Paul Vallas was depicted in an A.I.-generated ad, in a voice indistinguishable from his own, appearing to condone police brutality.
In an unregulated political environment, expect more deepfakes – many more. In the waning days before the election, it is reasonable to assume candidates, political parties, and especially outside groups will feel free to air deliberately deceptive deepfakes that depict opponents partying in an orgy, praising hostile foreign nations, and committing an assortment of felonies – all looking and sounding real.
A tsunami of A.I.-generated deception and misinformation can be expected as the 2024 election nears, to the detriment of all parties involved and, above all, of the public's confidence in the integrity of elections.
This paper documents the extent to which A.I.-generated content has saturated campaign communications in the 2024 elections. It analyzes the dangers deepfakes pose to the integrity of elections and the threats they pose to democracy itself, and it offers potential legislative and regulatory solutions.