Election Fraud, YouTube, and Public Perception of the Legitimacy of President Biden

We find that those most skeptical of the legitimacy of the 2020 election were recommended three times as many election fraud-related videos as were the least skeptical participants.

Abstract

Skepticism about the outcome of the 2020 presidential election in the United States led to a historic attack on the Capitol on January 6th, 2021, and represents one of the greatest challenges to America's democratic institutions in over a century. Narratives of fraud and conspiracy theories proliferated throughout the fall of 2020, finding fertile ground across online social networks, yet little is known about the extent and drivers of their spread. In this article, we show that users who were more skeptical of the election's legitimacy were more likely to be recommended content featuring narratives of election fraud. Our findings underscore the tension between an "effective" recommendation system that provides users with the content they want and a dangerous mechanism by which misinformation, disinformation, and conspiracies can find their way to those most likely to believe them.

Background

Narratives of fraud and conspiracy theories proliferated around the 2020 U.S. presidential election and ultimately contributed to the historic January 6th attack on the Capitol. There are myriad explanations for why this occurred, ranging from increasing mass polarization to declining cross-cutting identities to the pernicious consequences of online conspiracy theories and misinformation. Of particular concern in the popular press is the role played by online recommendation algorithms, which are thought to contribute to echo chambers, filter bubbles, and radicalization, yet there is little scholarly evidence to support this claim.

Study

To explore this question, we sampled more than 300 Americans with YouTube accounts in November and December of 2020. Subjects were asked how concerned they were about a number of aspects of election fraud and then asked to install a browser extension that recorded the lists of recommendations they were shown. Subjects were instructed to click on a randomly assigned YouTube video, and then to click on one of the recommendations they were shown according to a randomly assigned “traversal rule.” By restricting user behavior in this way, we isolated the recommendation algorithm’s influence on what real users were shown in real time.
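
To make the design concrete, here is a minimal sketch of how a randomly assigned traversal rule can fully determine click behavior during a session. The specific rules, function names, and parameters (`get_recommendations`, `depth`, the rule set itself) are hypothetical illustrations, not the study's actual implementation.

```python
import random

# Hypothetical traversal rules (illustrative only; the study's actual rules
# are not reproduced here). Each rule maps the list of recommendations shown
# to the single one the participant is instructed to click.
TRAVERSAL_RULES = {
    "first": lambda recs: recs[0],                    # always the top recommendation
    "last": lambda recs: recs[-1],                    # always the bottom recommendation
    "random_top5": lambda recs: random.choice(recs[:5]),
}

def run_session(seed_video, get_recommendations, rule_name, depth=20):
    """Follow one assigned traversal rule for `depth` clicks, logging the
    full recommendation list shown at every step."""
    rule = TRAVERSAL_RULES[rule_name]
    log = []
    current = seed_video
    for _ in range(depth):
        recs = get_recommendations(current)  # recommendations shown alongside `current`
        log.append({"video": current, "shown": recs})
        current = rule(recs)                 # behavior is fixed by the assigned rule
    return log

# Example assignment: each participant receives a random seed video and rule,
# e.g. rule_name = random.choice(list(TRAVERSAL_RULES)).
```

Because the rule, not the participant, decides every click, differences in what participants are shown can be attributed to the algorithm's response to who they are rather than to what they choose.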

Results

After analyzing data on all recommended videos, we found that those most skeptical of the election’s legitimacy were shown three times as many election fraud-related videos as were the least skeptical: roughly 8 additional recommendations out of the approximately 400 videos suggested to each study participant. While the overall prevalence of these videos was low, the findings reveal the detrimental consequences of a recommendation system that provides users with the content they want, and they cast doubt on the view that online information environments are determined solely by user choice.
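
For readers who want to check the magnitudes, a back-of-the-envelope calculation under the figures reported above (a threefold difference amounting to roughly 8 videos out of approximately 400) implies the following split; the exact counts are our inference, not figures quoted from the study.

```python
# Illustrative arithmetic only: if the most skeptical saw 3x as many
# fraud-related videos as the least skeptical, and the gap was ~8 videos
# out of ~400 recommendations, then 3x - x = 8 implies x = 4.
total_recs = 400
gap = 8                       # reported difference in fraud-related videos
least = gap / (3 - 1)         # least skeptical: ~4 videos
most = 3 * least              # most skeptical: ~12 videos
print(f"least skeptical: {least:.0f}/{total_recs} ({least/total_recs:.1%})")
print(f"most skeptical:  {most:.0f}/{total_recs} ({most/total_recs:.1%})")
# -> least skeptical: 4/400 (1.0%); most skeptical: 12/400 (3.0%)
```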