When It Comes to Understanding AI’s Impact on Elections, We’re Still Working in the Dark

March 4, 2025  ·   Policy

Greater transparency around AI-generated political advertising would transform researchers' ability to understand its potential effects on democracy and elections.

This article was originally published on Brookings.

Ahead of the 2024 U.S. election, there was widespread fear that generative artificial intelligence (AI) presented an unprecedented threat to democracy. Just six weeks before the election, more than half of Americans said they were “extremely or very concerned” that AI would be used to spread misleading information. Intelligence officials warned that these technologies would be used by foreign influence campaigns to undermine trust in democracy, and that growing access to AI tools would lead to a deluge of political deepfakes.  

This premature, “sky is falling” narrative was based on very little evidence, something we warned about a year ago. But while it seems clear that the worst predictions about AI didn’t come to pass, it’s similarly hasty to claim that 2024 was the “AI election that wasn’t,” that “we were deepfaked by deepfakes,” and that “political misinformation is not an AI problem,” as some observers have stated.

In reality, too little data is available to draw concrete conclusions. We know this because, for the past several months, our research team has tried to build a comprehensive database tracking the use of AI in political communications. But despite our best efforts, we found this task nearly impossible, in part due to a lack of transparency from online platforms. Overall, we found just 71 examples. Other researchers and journalists have tried to track election-related AI content as well, with similar outcomes. But it doesn’t need to be this way. As lawmakers at the state and federal levels continue to regulate AI, there are common-sense changes policymakers and platforms can make so that we aren’t flying blind trying to understand what impact AI has had, or will have, on elections.

The AI political archive

For more than a decade, academic researchers have used large-scale social media data to investigate and understand online political discourse. The advent of generative AI comes at a time when researchers have been increasingly cut off from social media data sources. That’s why we at New York University’s Center for Social Media and Politics (CSMaP) and the Center on Technology Policy (CTP) partnered with the American Association of Political Consultants (AAPC) to collect the data ourselves and build the AI Political Archive.  

We were motivated, in part, by the overblown narratives focusing on the most harmful cases of AI deepfakes. Our goal was to combat speculation about AI’s impact with robust evidence of the technology’s use at scale. In addition, with the media covering deepfakes of national politicians, we specifically aimed to capture a fuller range of uses of generative AI in down-ballot races, where it had been hypothesized the technology might have a greater impact, for good and ill.

Our work at CSMaP has always relied on robust data when assessing digital media’s impact, so we partnered with AAPC, the largest organization of political consultants, to get content straight from the campaigns making it. We also opened our archive to submissions from the public to crowdsource additional examples. But in the end, the majority of our data collection and labeling was done by NYU research assistants. This was in large part because we received very few submissions from political consultants, who may have been hesitant to disclose their use of AI technologies because voters’ perceptions of such tools are largely negative.

An added challenge of doing this work is that researchers have lost access to large-scale data about user experiences on social media over the last couple of years, which forced us to turn to alternative sources. While some of the largest platforms still provide archives of political ads, holes in the data and design limitations undermine their utility for research. Our team used Meta’s Ad Library and Google’s Ads Transparency Center to manually look for examples of AI use in political communications, a painstakingly slow process that required us to sift through all of the political advertisements on these sites looking for a small, gray line that read “digitally created.” We tried exporting all this content to search for keywords, but AI labels weren’t included in the ads’ metadata (the descriptive fields that provide information about a piece of content), making it impossible to filter for them.
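
To illustrate the gap, here is a minimal, hypothetical sketch in Python of the kind of filter we wanted to run on exported ad data. The file name and column names are illustrative assumptions, not the actual schema of Meta’s Ad Library or Google’s Ads Transparency Center exports; the point is that no disclosure field exists to filter on, so the on-screen “digitally created” label never surfaces in a programmatic search.

```python
import pandas as pd

# Hypothetical export of political ads (file and column names are
# illustrative, not any platform's real schema).
ads = pd.read_csv("political_ads_export.csv")

if "uses_synthetic_content" in ads.columns:
    # If a machine-readable disclosure flag existed, filtering would be trivial.
    ai_disclosed = ads[ads["uses_synthetic_content"] == True]
    print(f"{len(ai_disclosed)} of {len(ads)} ads disclose AI use")
else:
    # In the exports we worked with, no such field exists, and the visible
    # "digitally created" label is not part of the metadata, so keyword
    # searches over the exported text come up empty.
    print("No machine-readable AI-disclosure field; manual review required.")
```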

Examples of AI use in elections

In total, our archive ended up with 71 examples of AI applications. Many of the instances we tracked had already been covered nationally: Donald Trump’s bizarre array of social media posts with largely satirical AI-generated images and the now infamous Biden deepfake robocall. We did, however, uncover some interesting instances of both pro- and anti-democratic uses of AI in down-ballot races: county and state campaigns that disclosed using digital creation tools in their video ads on Meta platforms, a PAC using a deepfake in an attack ad on X, a congressional candidate who used AI to overlay different scenes onto a video advertisement, a gubernatorial candidate who created deceptively altered images, and even a voter pamphlet that was reportedly drafted with the help of a large language model.  

Based on the glimpses available, there have been many takes on how AI affected the election on a national level. The general consensus, and largely what we saw in the submissions to our archive, was that there wasn’t widespread use of AI in the general election to deceive voters. Rather, “AI-generated media have been used for transparent propaganda, satire, and emotional outpourings,” as Matteo Wong wrote in The Atlantic.  

Although these snapshots are helpful, the fragmented and opaque nature of our information environment has made it exceedingly difficult to create a comprehensive collection for understanding AI’s impact at scale.  

And we’re far from the only ones who have tried to do this: Wired, Rest of World, The German Marshall Fund, and a team of researchers at Purdue University (to name a few) have all tracked election-related AI content in the U.S. and abroad. Each team used a different approach and leveraged different resources. But a few months after the presidential election, our database and three of the ones mentioned above each contain fewer than 100 examples.

The team at Purdue, which took a much wider scope by including not only campaign communications but also content posted by average users, was able to track just under 500 examples going as far back as 2017. Even so, this is nowhere near representative of the actual range and scale of AI-generated political content online.

Political disclosures require metadata

Fortunately, we can change this. Both Google and Meta already collect data on and label political content that was made or augmented with AI. They also both require some political advertisers to disclose the use of digital creation tools. But the issue for researchers is that this information isn’t available systematically or in a machine-readable format such as structured metadata. For example, we can download a spreadsheet of political ads on Facebook, but there is no column in that spreadsheet indicating whether AI was used. While labeling content for users is an important step in building transparency, without the metadata to study that labeled content at scale, it’s nearly impossible for researchers to gather the evidence necessary for systematic research. This should be a simple fix. A binary field indicating whether synthetic content generation was used in a political ad would transform our ability to track AI’s role in political communications going forward.
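
As a concrete sketch of what that fix could look like, the record below shows a hypothetical ad-archive entry with a single added boolean field. The field and record structure are illustrative, not any platform’s actual schema; the point is that one machine-readable flag would let researchers count and study AI-disclosed ads across an entire export instead of reviewing each ad by hand.

```python
# Hypothetical ad-archive record; field names are illustrative only.
ad_record = {
    "ad_id": "123456789",
    "advertiser": "Example Campaign Committee",
    "spend_range_usd": [100, 499],
    "uses_synthetic_content": True,  # the proposed binary disclosure field
}

# With that one field, measuring AI use at scale becomes a simple aggregation.
def count_ai_disclosed(records):
    return sum(1 for r in records if r.get("uses_synthetic_content"))

print(count_ai_disclosed([ad_record]))  # -> 1
```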

Unfortunately, this lack of transparency is part of a larger trend. While major social media platforms like X and Facebook used to provide data for independent researchers, they’ve recently eliminated or severely curtailed those tools, and continue to cut back on fact-checking and content moderation. This lack of transparency leads to news narratives and policy decisions that rely on conventional wisdom, rather than on informed evidence.  

It’s unlikely that any comprehensive tech regulation will pass at the federal level in the next few years. So as states continue to take the lead on AI policy, they should consider mechanisms that could increase transparency, such as creating public archives for digital political ads or requiring platforms to disclose AI use in online political advertising, specifically in a machine-readable format.   

As policymakers continue to regulate AI, they should remember that much of what we think we know is informed by piecemeal snapshots and media narratives rather than by comprehensive evidence. Many have already been quick to claim that AI is undermining democracy, while others say that these fears are overblown. But without policies that require greater cooperation and transparency from both platforms and advertisers themselves, researchers, journalists, policymakers, and the public are all working in the dark to understand AI’s impact.