Academic Research

CSMaP faculty, postdoctoral fellows, and students publish rigorous, peer-reviewed research in top academic journals and post working papers that share ongoing work.


  • Journal Article

    The Diffusion and Reach of (Mis)Information on Facebook During the U.S. 2020 Election

    Sociological Science, 2024


    Social media creates the possibility for rapid, viral spread of content, but how many posts actually reach millions? And is misinformation special in how it propagates? We answer these questions by analyzing the virality of and exposure to information on Facebook during the U.S. 2020 presidential election. We examine the diffusion trees of the approximately 1 billion posts that were re-shared at least once by U.S.-based adults from July 1, 2020, to February 1, 2021. We differentiate misinformation from non-misinformation posts to show that (1) misinformation diffused more slowly, relying on a small number of active users that spread misinformation via long chains of peer-to-peer diffusion that reached millions; non-misinformation spread primarily through one-to-many affordances (mainly, Pages); (2) the relative importance of peer-to-peer spread for misinformation was likely due to an enforcement gap in content moderation policies designed to target mostly Pages and Groups; and (3) periods of aggressive content moderation proximate to the election coincide with dramatic drops in the spread and reach of misinformation and (to a lesser extent) political content.
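
    A minimal sketch of the kind of cascade-shape summary this analysis rests on, using hypothetical data and invented names (cascade_shape, edges): deep, narrow trees correspond to long peer-to-peer reshare chains, while shallow, wide trees correspond to one-to-many spread from a single source such as a Page. This illustrates the concept only; it is not the paper's code.

```python
from collections import defaultdict

def cascade_shape(edges, root):
    """Summarize a reshare cascade given (parent, child) edges.

    Returns the cascade's size, depth (number of levels), and breadth
    (largest number of nodes on any level). High depth with low breadth
    suggests peer-to-peer chains; low depth with high breadth suggests
    one-to-many spread.
    """
    children = defaultdict(list)
    for parent, child in edges:
        children[parent].append(child)

    depth, breadth, level = 0, 0, [root]
    while level:
        breadth = max(breadth, len(level))
        depth += 1
        level = [c for node in level for c in children[node]]
    return {"size": 1 + len(edges), "depth": depth, "breadth": breadth}

# Hypothetical cascades: one long chain versus one shallow burst.
chain = [(0, 1), (1, 2), (2, 3), (3, 4)]
burst = [(0, i) for i in range(1, 5)]
print(cascade_shape(chain, root=0))  # {'size': 5, 'depth': 5, 'breadth': 1}
print(cascade_shape(burst, root=0))  # {'size': 5, 'depth': 2, 'breadth': 4}
```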

  • Journal Article

    Measuring Receptivity to Misinformation at Scale on a Social Media Platform

    PNAS Nexus, 2024


    Measuring the impact of online misinformation is challenging. Traditional measures, such as user views or shares on social media, are incomplete because not everyone who is exposed to misinformation is equally likely to believe it. To address this issue, we developed a method that combines survey data with observational Twitter data to probabilistically estimate the number of users both exposed to and likely to believe a specific news story. As a proof of concept, we applied this method to 139 viral news articles and find that although false news reaches an audience with diverse political views, users who are both exposed and receptive to believing false news tend to have more extreme ideologies. These receptive users are also more likely to encounter misinformation earlier than those who are unlikely to believe it. This mismatch between overall user exposure and receptive user exposure underscores the limitation of relying solely on exposure or interaction data to measure the impact of misinformation, as well as the challenge of implementing effective interventions. To demonstrate how our approach can address this challenge, we then conducted data-driven simulations of common interventions used by social media platforms. We find that these interventions are only modestly effective at reducing exposure among users likely to believe misinformation, and their effectiveness quickly diminishes unless implemented soon after misinformation’s initial spread. Our paper provides a more precise estimate of misinformation’s impact by focusing on the exposure of users likely to believe it, offering insights for effective mitigation strategies on social media.
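
    A minimal sketch of the general idea of weighting exposure by estimated receptivity, with hypothetical bins and numbers; the study's actual method combines survey responses with observational Twitter data in a fuller probabilistic model.

```python
# Hypothetical inputs: P(believe story | ideology bin) from a survey, and
# the estimated number of exposed platform users in each bin.
belief_rate = {"far_left": 0.05, "center": 0.10, "far_right": 0.45}
exposed = {"far_left": 40_000, "center": 150_000, "far_right": 60_000}

total_exposed = sum(exposed.values())
# "Receptive exposure": users who both saw the story and are likely to believe it.
receptive_exposed = sum(belief_rate[b] * exposed[b] for b in exposed)

print(f"total exposure:     {total_exposed:,}")         # 250,000
print(f"receptive exposure: {receptive_exposed:,.0f}")  # 44,000
# Raw exposure overstates impact when most exposed users are unlikely to believe.
```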

  • Journal Article

    The Effects of Facebook and Instagram on the 2020 Election: A Deactivation Experiment

    • Hunt Allcott, 
    • Matthew Gentzkow, 
    • Winter Mason, 
    • Arjun Wilkins, 
    • Pablo Barberá, 
    • Taylor Brown, 
    • Juan Carlos Cisneros, 
    • Adriana Crespo-Tenorio, 
    • Drew Dimmery, 
    • Deen Freelon, 
    • Sandra González-Bailón, 
    • Andrew M. Guess, 
    • Young Mie Kim, 
    • David Lazer, 
    • Neil Malhotra, 
    • Devra Moehler, 
    • Sameer Nair-Desai, 
    • Houda Nait El Barj, 
    • Brendan Nyhan, 
    • Ana Carolina Paixao de Queiroz, 
    • Jennifer Pan, 
    • Jaime Settle, 
    • Emily Thorson, 
    • Rebekah Tromble, 
    • Carlos Velasco Rivera, 
    • Benjamin Wittenbrink, 
    • Magdalena Wojcieszak, 
    • Saam Zahedian, 
    • Annie Franco, 
    • Chad Kiewiet De Jong, 
    • Natalie Jomini Stroud, 
    • Joshua A. Tucker

    Proceedings of the National Academy of Sciences, 2024


    We study the effect of Facebook and Instagram access on political beliefs, attitudes, and behavior by randomizing a subset of 19,857 Facebook users and 15,585 Instagram users to deactivate their accounts for 6 weeks before the 2020 U.S. election. We report four key findings. First, both Facebook and Instagram deactivation reduced an index of political participation (driven mainly by reduced participation online). Second, Facebook deactivation had no significant effect on an index of knowledge, but secondary analyses suggest that it reduced knowledge of general news while possibly also decreasing belief in misinformation circulating online. Third, Facebook deactivation may have reduced self-reported net votes for Trump, though this effect does not meet our preregistered significance threshold. Finally, the effects of both Facebook and Instagram deactivation on affective and issue polarization, perceived legitimacy of the election, candidate favorability, and voter turnout were all precisely estimated and close to zero.

  • Book

    Online Data and the Insurrection

    Media and January 6th, 2024


    Online data is key to understanding the leadup to the January 6 insurrection, including how and why election fraud conspiracies spread online, how conspiracy groups organized online to participate in the insurrection, and other factors of online life that led to the insurrection. However, there are significant challenges in accessing data for this research. First, platforms restrict which researchers get access to data, as well as what researchers can do with the data they access. Second, this data is ephemeral; that is, once users or the platform remove the data, researchers can no longer access it. These factors affect what research questions can ever be asked and answered.

  • Journal Article

    Like-Minded Sources on Facebook Are Prevalent but Not Polarizing

    • Brendan Nyhan, 
    • Jaime Settle, 
    • Emily Thorson, 
    • Magdalena Wojcieszak, 
    • Pablo Barberá, 
    • Annie Y. Chen, 
    • Hunt Allcott, 
    • Taylor Brown, 
    • Adriana Crespo-Tenorio, 
    • Drew Dimmery, 
    • Deen Freelon, 
    • Matthew Gentzkow, 
    • Sandra González-Bailón, 
    • Andrew M. Guess, 
    • Edward Kennedy, 
    • Young Mie Kim, 
    • David Lazer, 
    • Neil Malhotra, 
    • Devra Moehler, 
    • Jennifer Pan, 
    • Daniel Robert Thomas, 
    • Rebekah Tromble, 
    • Carlos Velasco Rivera, 
    • Arjun Wilkins, 
    • Beixian Xiong, 
    • Chad Kiewiet De Jong, 
    • Annie Franco, 
    • Winter Mason, 
    • Natalie Jomini Stroud, 
    • Joshua A. Tucker

    Nature, 2023


    Many critics raise concerns about the prevalence of ‘echo chambers’ on social media and their potential role in increasing political polarization. However, the lack of available data and the challenges of conducting large-scale field experiments have made it difficult to assess the scope of the problem [1,2]. Here we present data from 2020 for the entire population of active adult Facebook users in the USA showing that content from ‘like-minded’ sources constitutes the majority of what people see on the platform, although political information and news represent only a small fraction of these exposures. To evaluate a potential response to concerns about the effects of echo chambers, we conducted a multi-wave field experiment on Facebook among 23,377 users for whom we reduced exposure to content from like-minded sources during the 2020 US presidential election by about one-third. We found that the intervention increased their exposure to content from cross-cutting sources and decreased exposure to uncivil language, but had no measurable effects on eight preregistered attitudinal measures such as affective polarization, ideological extremity, candidate evaluations and belief in false claims. These precisely estimated results suggest that although exposure to content from like-minded sources on social media is common, reducing its prevalence during the 2020 US presidential election did not correspondingly reduce polarization in beliefs or attitudes.

  • Journal Article

    How Do Social Media Feed Algorithms Affect Attitudes and Behavior in an Election Campaign?

    • Andrew M. Guess, 
    • Neil Malhotra, 
    • Jennifer Pan, 
    • Pablo Barberá, 
    • Hunt Allcott, 
    • Taylor Brown, 
    • Adriana Crespo-Tenorio, 
    • Drew Dimmery, 
    • Deen Freelon, 
    • Matthew Gentzkow, 
    • Sandra González-Bailón, 
    • Edward Kennedy, 
    • Young Mie Kim, 
    • David Lazer, 
    • Devra Moehler, 
    • Brendan Nyhan, 
    • Jaime Settle, 
    • Carlos Velasco Rivera, 
    • Daniel Robert Thomas, 
    • Emily Thorson, 
    • Rebekah Tromble, 
    • Beixian Xiong, 
    • Chad Kiewiet De Jong, 
    • Annie Franco, 
    • Winter Mason, 
    • Natalie Jomini Stroud, 
    • Joshua A. Tucker

    Science, 2023


    We investigated the effects of Facebook’s and Instagram’s feed algorithms during the 2020 US election. We assigned a sample of consenting users to reverse-chronologically-ordered feeds instead of the default algorithms. Moving users out of algorithmic feeds substantially decreased the time they spent on the platforms and their activity. The chronological feed also affected exposure to content: The amount of political and untrustworthy content they saw increased on both platforms, the amount of content classified as uncivil or containing slur words they saw decreased on Facebook, and the amount of content from moderate friends and sources with ideologically mixed audiences they saw increased on Facebook. Despite these substantial changes in users’ on-platform experience, the chronological feed did not significantly alter levels of issue polarization, affective polarization, political knowledge, or other key attitudes during the 3-month study period.

  • Journal Article

    Reshares on Social Media Amplify Political News but Do Not Detectably Affect Beliefs or Opinions

    • Andrew M. Guess, 
    • Neil Malhotra, 
    • Jennifer Pan, 
    • Pablo Barberá, 
    • Hunt Allcott, 
    • Taylor Brown, 
    • Adriana Crespo-Tenorio, 
    • Drew Dimmery, 
    • Deen Freelon, 
    • Matthew Gentzkow, 
    • Sandra González-Bailón, 
    • Edward Kennedy, 
    • Young Mie Kim, 
    • David Lazer, 
    • Devra Moehler, 
    • Brendan Nyhan, 
    • Carlos Velasco Rivera, 
    • Jaime Settle, 
    • Daniel Robert Thomas, 
    • Emily Thorson, 
    • Rebekah Tromble, 
    • Arjun Wilkins, 
    • Magdalena Wojcieszak, 
    • Beixian Xiong, 
    • Chad Kiewiet De Jong, 
    • Annie Franco, 
    • Winter Mason, 
    • Natalie Jomini Stroud, 
    • Joshua A. Tucker

    Science, 2023


    We studied the effects of exposure to reshared content on Facebook during the 2020 US election by assigning a random set of consenting, US-based users to feeds that did not contain any reshares over a 3-month period. We find that removing reshared content substantially decreases the amount of political news, including content from untrustworthy sources, to which users are exposed; decreases overall clicks and reactions; and reduces partisan news clicks. Further, we observe that removing reshared content produces clear decreases in news knowledge within the sample, although there is some uncertainty about how this would generalize to all users. Contrary to expectations, the treatment does not significantly affect political polarization or any measure of individual-level political attitudes.

  • Journal Article

    Asymmetric Ideological Segregation in Exposure to Political News on Facebook

    • Sandra González-Bailón, 
    • David Lazer, 
    • Pablo Barberá, 
    • Meiqing Zhang, 
    • Hunt Allcott, 
    • Taylor Brown, 
    • Adriana Crespo-Tenorio, 
    • Deen Freelon, 
    • Matthew Gentzkow, 
    • Andrew M. Guess, 
    • Shanto Iyengar, 
    • Young Mie Kim, 
    • Neil Malhotra, 
    • Devra Moehler, 
    • Brendan Nyhan, 
    • Jennifer Pan, 
    • Carlos Velasco Rivera, 
    • Jaime Settle, 
    • Emily Thorson, 
    • Rebekah Tromble, 
    • Arjun Wilkins, 
    • Magdalena Wojcieszak, 
    • Chad Kiewiet De Jong, 
    • Annie Franco, 
    • Winter Mason, 
    • Joshua A. Tucker, 
    • Natalie Jomini Stroud

    Science, 2023


    Does Facebook enable ideological segregation in political news consumption? We analyzed exposure to news during the US 2020 election using aggregated data for 208 million US Facebook users. We compared the inventory of all political news that users could have seen in their feeds with the information that they saw (after algorithmic curation) and the information with which they engaged. We show that (i) ideological segregation is high and increases as we shift from potential exposure to actual exposure to engagement; (ii) there is an asymmetry between conservative and liberal audiences, with a substantial corner of the news ecosystem consumed exclusively by conservatives; and (iii) most misinformation, as identified by Meta’s Third-Party Fact-Checking Program, exists within this homogeneously conservative corner, which has no equivalent on the liberal side. Sources favored by conservative audiences were more prevalent on Facebook’s news ecosystem than those favored by liberals.
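
    A minimal sketch of an isolation-style segregation score of the kind such analyses use, with hypothetical inputs: the average conservative audience share of the content seen by conservatives minus the same quantity for liberals. Computing it separately for potential exposure, actual exposure, and engagement would mirror the increase across stages described above; the function and numbers are illustrative, not the paper's measure.

```python
def isolation_index(events):
    """events: (viewer_ideology, conservative_share_of_item_audience) pairs."""
    cons = [share for group, share in events if group == "conservative"]
    libs = [share for group, share in events if group == "liberal"]
    return sum(cons) / len(cons) - sum(libs) / len(libs)

# Hypothetical data for two stages of the exposure funnel.
potential = [("conservative", 0.55), ("conservative", 0.60),
             ("liberal", 0.45), ("liberal", 0.50)]
engaged = [("conservative", 0.80), ("conservative", 0.85),
           ("liberal", 0.25), ("liberal", 0.30)]

print(round(isolation_index(potential), 2))  # 0.1
print(round(isolation_index(engaged), 2))    # 0.55
```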

  • Journal Article

    Election Fraud, YouTube, and Public Perception of the Legitimacy of President Biden

    Journal of Online Trust and Safety, 2022


    Skepticism about the outcome of the 2020 presidential election in the United States led to a historic attack on the Capitol on January 6th, 2021, and represents one of the greatest challenges to America's democratic institutions in over a century. Narratives of fraud and conspiracy theories proliferated over the fall of 2020, finding fertile ground across online social networks, although little is known about the extent and drivers of this spread. In this article, we show that users who were more skeptical of the election's legitimacy were more likely to be recommended content that featured narratives about the legitimacy of the election. Our findings underscore the tension between an "effective" recommendation system that provides users with the content they want, and a dangerous mechanism by which misinformation, disinformation, and conspiracies can find their way to those most likely to believe them.


  • Working Paper

    To Moderate, Or Not to Moderate: Strategic Domain Sharing by Congressional Campaigns

    Working Paper, April 2022


    We test whether candidates move to the extremes before a primary but then return to the center for the general election to appeal to the different preferences of each electorate. Incumbents are now more vulnerable to primary challenges than ever, as social media offers challengers a viable pathway for fundraising and messaging, while the homogeneity of districts has reduced general election competitiveness. To assess candidates' ideological trajectories, we estimate the revealed ideology of 2020 congressional candidates (incumbents, their primary challengers, and open-seat candidates) before and after their primaries, using a homophily-based measure of domains shared on Twitter. This method provides temporally granular data to observe changes in communication within a single election campaign cycle. We find that incumbents did move toward the extremes for their primaries and back toward the center for the general election, but only when threatened by a well-funded primary challenge; non-incumbents showed no such shift.
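
    A minimal sketch of a homophily-based domain measure under assumed inputs: each domain is scored by the mean ideology of reference accounts that share it, and a candidate's revealed position is the mean score of the domains they tweeted, computed separately before and after the primary. The names and numbers are hypothetical, not the paper's estimator.

```python
from statistics import mean

# Hypothetical ideology scores (-1 = liberal, +1 = conservative) of reference
# accounts observed sharing each domain.
sharer_ideology = {
    "breitbart.com": [0.8, 0.9, 0.7],
    "nytimes.com": [-0.6, -0.4, -0.5],
    "thehill.com": [0.1, -0.1, 0.0],
}
domain_score = {domain: mean(scores) for domain, scores in sharer_ideology.items()}

def candidate_position(shared_domains):
    """Average the scores of the domains a candidate shared."""
    return mean(domain_score[d] for d in shared_domains)

pre_primary = ["breitbart.com", "breitbart.com", "thehill.com"]
post_primary = ["thehill.com", "nytimes.com", "thehill.com"]
print(round(candidate_position(pre_primary), 2))   # 0.53 (nearer the extreme)
print(round(candidate_position(post_primary), 2))  # -0.17 (nearer the center)
```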


  • Journal Article

    Twitter Flagged Donald Trump’s Tweets with Election Misinformation: They Continued to Spread Both On and Off the Platform

    Harvard Kennedy School (HKS) Misinformation Review, 2021


    We analyze the spread of Donald Trump’s tweets that were flagged by Twitter using two intervention strategies—attaching a warning label and blocking engagement with the tweet entirely. We find that while blocking engagement on certain tweets limited their diffusion, messages we examined with warning labels spread further on Twitter than those without labels. Additionally, the messages that had been blocked on Twitter remained popular on Facebook, Instagram, and Reddit, being posted more often and garnering more visibility than messages that had either been labeled by Twitter or received no intervention at all. Taken together, our results emphasize the importance of considering content moderation at the ecosystem level.
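
    A minimal sketch of the kind of ecosystem-level comparison described above, with hypothetical records: group flagged messages by intervention type and compare their average spread on Twitter with their spread on other platforms. Field names and values are illustrative, not the study's data.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical per-message records.
messages = [
    {"intervention": "label", "twitter_retweets": 12_000, "other_platform_posts": 300},
    {"intervention": "label", "twitter_retweets": 9_500, "other_platform_posts": 250},
    {"intervention": "block", "twitter_retweets": 0, "other_platform_posts": 900},
    {"intervention": "block", "twitter_retweets": 0, "other_platform_posts": 750},
    {"intervention": "none", "twitter_retweets": 7_000, "other_platform_posts": 200},
]

by_type = defaultdict(list)
for message in messages:
    by_type[message["intervention"]].append(message)

for kind, group in by_type.items():
    on_platform = mean(m["twitter_retweets"] for m in group)
    off_platform = mean(m["other_platform_posts"] for m in group)
    print(f"{kind:>5}: on-platform {on_platform:,.0f}, off-platform {off_platform:,.0f}")
```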
