The Limited Room for Russian Troll Influence in 2016
Coordinated campaigns by sock puppets on social media are likely neither necessary nor sufficient to signify serious foreign threats to electoral integrity.
This article was originally published at Lawfare.
In partnership with the Stanford Internet Observatory, Lawfare is publishing a series assessing the threat of foreign influence operations targeting the United States. Contributors debate whether the threat of such operations is overblown, voicing a range of views on the issue. For more information on the series, click here.
Has the impact of foreign electoral interference on social media been overblown? This is a complex question that requires an understanding of both national security priorities and the empirical determinants of voting behavior. I’m part of team “overblown” in this debate but want to be explicit about exactly what it is I think is overblown. Thus, it is important that I begin by defining the scope of my answer with two qualifications, each of which distinguishes between two things.
First, foreign interference in U.S. elections can be understood as threatening along two dimensions: (a) as an inherent threat to national security, because it represents an attempt by a foreign adversary to interfere with the way U.S. citizens select their political leadership, and (b) as a function of the actual impact of that interference on the outcome of the election. I am very much not addressing the former here and will focus instead on the latter.
Second, there are two different ways to think about the impact of Russian Internet Research Agency (IRA) electoral interference in 2016, each focusing on a different aspect of IRA activity. The first, which initially received much less popular attention but has since been laid out meticulously by Kathleen Hall Jamieson in her book “Cyber War,” is that Russia’s hacking and leaking of sensitive personal materials may have altered the media narrative in the crucial last weeks of the campaign. In particular, Jamieson suggests that the hack-and-leak blunted what might otherwise have been a fatal blow to the Trump campaign from the Access Hollywood tape. The second, which has received a great deal of both popular and scholarly attention, is that “sock puppet” (or false identity) social media accounts—the now famous IRA trolls—actually changed voter attitudes and behavior. My “overblown” position is focused on this latter claim. More specifically, I lay out the case for why it is unlikely that these online IRA trolls had any significant impact on the election outcome.
In a highly polarized political society, it is difficult to change anyone’s vote, period. Ask yourself how many people you know who went into this election period unsure whether they were going to vote for Trump, Biden or a third candidate. How about people unsure if they were planning to vote at all? On Jan. 1, 2016, careful observers probably could have accurately predicted the vote of the vast majority of the electorate, and the same still holds this election cycle. Moreover, the prevailing wisdom in the political science community is that campaigns—and the billions of dollars spent on them—have fairly little impact on the way elections turn out because they largely cancel each other out. Thus, the number of votes that could have been affected by any influence campaign, even a successful one, was limited from the start.
Moreover, for all the ads, tweets and Facebook posts that IRA trolls produced, these were undoubtedly just a tiny fraction of the content the average American saw during the election. IRA spending on ads was orders of magnitude lower than the spending by the campaigns. IRA posts on Twitter were dwarfed by posts from politicians and news outlets. And as we know from prior research in our lab, a lot of what the IRA troll accounts were spreading was links to U.S. national and local news. Put another way, they were largely pushing information that was already in the news ecosystem and available to Americans. Thus, any claim that individual behavior was altered by Russian IRA tweets would rest on the assumption that a tiny fraction of the overall content about the election that one saw online was enough to change one’s behavior—and this is on top of the fact that few people change their vote preference over the course of the election.
In addition, most of what we know to date about exposure to and engagement with low-quality online information sources (“fake news”) in the course of the 2016 election campaign suggests that it was highly concentrated—that is, a small number of users accounted for the vast majority of engagements. If the same pattern holds for Russian IRA behavior—and research in progress in our lab suggests that this is likely to have been the case on Twitter, at least—then for the average person, the number of exposures to troll-amplified content would be even lower.
Let’s take a step back and look at this all together. We have a situation where exposure to IRA troll posts on social media was likely a tiny fraction of what people saw on social media during the campaign (to say nothing of what they saw elsewhere online, on television and in the press, and offline), where even this tiny fraction was likely highly concentrated among small numbers of users, and where this was all set against the background of a highly polarized electorate in which most people had made up their minds for whom to vote before the campaign even started. Is it possible that there were some individuals who may not have been planning to vote but were likely Trump supporters, and exposure to Russian IRA troll posts was what pushed them over the edge to turn out instead of staying home? Or that the same held for someone vacillating among Hillary Clinton, Jill Stein and not voting at all? Maybe. Is it likely that this would have happened to thousands of people (to say nothing of the tens of thousands of voters in the specific states that would have been necessary to sway the outcome of the election)? Doubtful.
There are legitimate reasons to be wary of foreign electoral interference, and there are good reasons to suspect that actions by foreign adversaries may have changed the media narrative in 2016. I myself remain very worried about cyberattacks on election infrastructure in 2020. I am glad that social media platforms seem better prepared to address the issue of foreign sock puppets than they were in 2016, but there is value in correctly identifying the potential impact (or lack thereof) of different sources of threats. Coordinated campaigns by sock puppets on social media are likely neither necessary nor sufficient to signify serious foreign threats to electoral integrity.