Academic Research

As an academic research institute dedicated to studying how social media impacts politics, policy, and democracy, CSMaP publishes peer-reviewed research in top academic journals and produces rigorous data reports on policy-relevant topics.

  • Journal Article

    Election Fraud, YouTube, and Public Perception of the Legitimacy of President Biden

    Journal of Online Trust and Safety, 2022

Skepticism about the outcome of the 2020 presidential election in the United States led to a historic attack on the Capitol on January 6th, 2021 and represents one of the greatest challenges to America's democratic institutions in over a century. Narratives of fraud and conspiracy theories proliferated over the fall of 2020, finding fertile ground across online social networks, although little is known about the extent and drivers of this spread. In this article, we show that users who were more skeptical of the election's legitimacy were more likely to be recommended content that featured narratives about the legitimacy of the election. Our findings underscore the tension between an "effective" recommendation system that provides users with the content they want, and a dangerous mechanism by which misinformation, disinformation, and conspiracies can find their way to those most likely to believe them.

    Date Posted

    Sep 01, 2022

  • Working Paper

    Echo Chambers, Rabbit Holes, and Algorithmic Bias: How YouTube Recommends Content to Real Users

    Working Paper, May 2022

To what extent does the YouTube recommendation algorithm push users into echo chambers, ideologically biased content, or rabbit holes? Despite growing popular concern, recent work suggests that the recommendation algorithm is not pushing users into these echo chambers. However, existing research relies heavily on the use of anonymous data collection that does not account for the personalized nature of the recommendation algorithm. We asked a sample of real users to install a browser extension that downloaded the list of videos they were recommended. We instructed these users to start on an assigned video and then click through 20 sets of recommendations, capturing what they were being shown in real time as they used the platform logged into their real accounts. Using a novel method to estimate the ideology of a YouTube video, we demonstrate that the YouTube recommendation algorithm does, in fact, push real users into mild ideological echo chambers where, by the end of the data collection task, liberals and conservatives received different distributions of recommendations from each other, though this difference is small. While we find evidence that this difference increases the longer the user followed the recommendation algorithm, we do not find evidence that many go down "rabbit holes" that lead them to ideologically extreme content. Finally, we find that YouTube pushes all users, regardless of ideology, towards moderately conservative content and an increasingly narrow range of ideological content the longer they follow YouTube's recommendations.

    Date Posted

    May 11, 2022

  • Journal Article

    News Credibility Labels Have Limited Average Effects on News Diet Quality and Fail to Reduce Misperceptions

    Science Advances, 2022

    As the primary arena for viral misinformation shifts toward transnational threats, the search continues for scalable countermeasures compatible with principles of transparency and free expression. We conducted a randomized field experiment evaluating the impact of source credibility labels embedded in users’ social feeds and search results pages. By combining representative surveys (n = 3337) and digital trace data (n = 968) from a subset of respondents, we provide a rare ecologically valid test of such an intervention on both attitudes and behavior. On average across the sample, we are unable to detect changes in real-world consumption of news from low-quality sources after 3 weeks. We can also rule out small effects on perceived accuracy of popular misinformation spread about the Black Lives Matter movement and coronavirus disease 2019. However, we present suggestive evidence of a substantively meaningful increase in news diet quality among the heaviest consumers of misinformation. We discuss the implications of our findings for scholars and practitioners.

    Date Posted

    May 06, 2022

  • Journal Article

    Moderating with the Mob: Evaluating the Efficacy of Real-Time Crowdsourced Fact-Checking

    Journal of Online Trust and Safety, 2021

Reducing the spread of false news remains a challenge for social media platforms, as the current strategy of using third-party fact-checkers lacks the capacity to address both the scale and speed of misinformation diffusion. Research on the “wisdom of the crowds” suggests one possible solution: aggregating the evaluations of ordinary users to assess the veracity of information. In this study, we investigate the effectiveness of a scalable model for real-time crowdsourced fact-checking. We select 135 popular news stories and have them evaluated by both ordinary individuals and professional fact-checkers within 72 hours of publication, producing 12,883 individual evaluations. Although we find that machine learning-based models using the crowd perform better at identifying false news than simple aggregation rules, our results suggest that neither approach is able to perform at the level of professional fact-checkers. Additionally, both methods perform best when using evaluations only from survey respondents with high political knowledge, suggesting reason for caution for crowdsourced models that rely on a representative sample of the population. Overall, our analyses reveal that while crowd-based systems provide some information on news quality, they are nonetheless limited—and have significant variation—in their ability to identify false news.

    Date Posted

    Oct 28, 2021

  • Journal Article

    Twitter Flagged Donald Trump’s Tweets with Election Misinformation: They Continued to Spread Both On and Off the Platform

    Harvard Kennedy School (HKS) Misinformation Review, 2021

    We analyze the spread of Donald Trump’s tweets that were flagged by Twitter using two intervention strategies—attaching a warning label and blocking engagement with the tweet entirely. We find that while blocking engagement on certain tweets limited their diffusion, messages we examined with warning labels spread further on Twitter than those without labels. Additionally, the messages that had been blocked on Twitter remained popular on Facebook, Instagram, and Reddit, being posted more often and garnering more visibility than messages that had either been labeled by Twitter or received no intervention at all. Taken together, our results emphasize the importance of considering content moderation at the ecosystem level.

  • Journal Article

    Cracking Open the News Feed: Exploring What U.S. Facebook Users See and Share with Large-Scale Platform Data

    Journal of Quantitative Description: Digital Media, 2021

    In this study, we analyze for the first time newly available engagement data covering millions of web links shared on Facebook to describe how and by which categories of U.S. users different types of news are seen and shared on the platform. We focus on articles from low-credibility news publishers, credible news sources, purveyors of clickbait, and news specifically about politics, which we identify through a combination of curated lists and supervised classifiers. Our results support recent findings that more fake news is shared by older users and conservatives and that both viewing and sharing patterns suggest a preference for ideologically congenial misinformation. We also find that fake news articles related to politics are more popular among older Americans than other types, while the youngest users share relatively more articles with clickbait headlines. Across the platform, however, articles from credible news sources are shared over 5.5 times more often and viewed over 7.5 times more often than articles from low-credibility sources. These findings offer important context for researchers studying the spread and consumption of information — including misinformation — on social media.

    Date Posted

    Apr 26, 2021

  • Journal Article

Political Psychology in the Digital (Mis)Information Age: A Model of News Belief and Sharing

    Social Issues and Policy Review, 2021

    The spread of misinformation, including “fake news,” propaganda, and conspiracy theories, represents a serious threat to society, as it has the potential to alter beliefs, behavior, and policy. Research is beginning to disentangle how and why misinformation is spread and identify processes that contribute to this social problem. We propose an integrative model to understand the social, political, and cognitive psychology risk factors that underlie the spread of misinformation and highlight strategies that might be effective in mitigating this problem. However, the spread of misinformation is a rapidly growing and evolving problem; thus scholars need to identify and test novel solutions, and work with policymakers to evaluate and deploy these solutions. Hence, we provide a roadmap for future research to identify where scholars should invest their energy in order to have the greatest overall impact.

    Date Posted

    Jan 22, 2021

  • Journal Article

    You Won’t Believe Our Results! But They Might: Heterogeneity in Beliefs About the Accuracy of Online Media

    Journal of Experimental Political Science, 2021

“Clickbait” media has long been espoused as an unfortunate consequence of the rise of digital journalism. But little is known about why readers choose to read clickbait stories. Is it merely curiosity, or might voters think such stories are more likely to provide useful information? We conduct a survey experiment in Italy, where a major political party enthusiastically embraced the esthetics of new media and encouraged their supporters to distrust legacy outlets in favor of online news. We offer respondents a monetary incentive for correct answers to manipulate the relative salience of the motivation for accurate information. This incentive increases differences in the preference for clickbait; older and less educated subjects become even more likely to opt to read a story with a clickbait headline when the incentive to produce a factually correct answer is higher. Our model suggests that a politically relevant subset of the population prefers clickbait media because they trust it more.

    Date Posted

    Jan 20, 2021

  • Journal Article

    Political Knowledge and Misinformation in the Era of Social Media: Evidence From the 2015 UK Election

    British Journal of Political Science, 2022

    Does social media educate voters, or mislead them? This study measures changes in political knowledge among a panel of voters surveyed during the 2015 UK general election campaign while monitoring the political information to which they were exposed on the Twitter social media platform. The study's panel design permits identification of the effect of information exposure on changes in political knowledge. Twitter use led to higher levels of knowledge about politics and public affairs, as information from news media improved knowledge of politically relevant facts, and messages sent by political parties increased knowledge of party platforms. But in a troubling demonstration of campaigns' ability to manipulate knowledge, messages from the parties also shifted voters' assessments of the economy and immigration in directions favorable to the parties' platforms, leaving some voters with beliefs further from the truth at the end of the campaign than they were at its beginning.

  • Journal Article

    Using Social and Behavioral Science to Support COVID-19 Pandemic Response

    • Jay J. Van Bavel
    • Katherine Baicker
    • Paulo S. Boggio
    • Valerio Capraro
    • Aleksandra Cichocka
    • Mina Cikara
    • Molly J. Crockett
    • Alia J. Crum
    • Karen M. Douglas
    • James N. Druckman
    • John Drury
    • Oeindrila Dube
    • Naomi Ellemers
    • Eli J. Finkel
    • James H. Fowler
    • Michele Gelfand
    • Shihui Han
    • S. Alexander Haslam
    • Jolanda Jetten
    • Shinobu Kitayama
    • Dean Mobbs
    • Lucy E. Napper
    • Dominic J. Packer
    • Gordon Pennycook
    • Ellen Peters
    • Richard E. Petty
    • David G. Rand
    • Stephen D. Reicher
    • Simone Schnall
    • Azim Shariff
    • Linda J. Skitka
    • Sandra Susan Smith
    • Cass R. Sunstein
    • Nassim Tabri
    • Joshua A. Tucker
    • Sander van der Linden
    • Paul van Lange
    • Kim A. Weeden
    • Michael J. A. Wohl
    • Jamil Zaki
    • Sean R. Zion
    • Robb Willer

Nature Human Behaviour, 2020

    The COVID-19 pandemic represents a massive global health crisis. Because the crisis requires large-scale behaviour change and places significant psychological burdens on individuals, insights from the social and behavioural sciences can be used to help align human behaviour with the recommendations of epidemiologists and public health experts. Here we discuss evidence from a selection of research topics relevant to pandemics, including work on navigating threats, social and cultural influences on behaviour, science communication, moral decision-making, leadership, and stress and coping. In each section, we note the nature and quality of prior research, including uncertainty and unsettled issues. We identify several insights for effective response to the COVID-19 pandemic and highlight important gaps researchers should move quickly to fill in the coming weeks and months.

    Date Posted

    Apr 30, 2020

  • Journal Article

    Less Than You Think: Prevalence and Predictors of Fake News Dissemination on Facebook

    Science Advances, 2019

    So-called “fake news” has renewed concerns about the prevalence and effects of misinformation in political campaigns. Given the potential for widespread dissemination of this material, we examine the individual-level characteristics associated with sharing false articles during the 2016 U.S. presidential campaign. To do so, we uniquely link an original survey with respondents’ sharing activity as recorded in Facebook profile data. First and foremost, we find that sharing this content was a relatively rare activity. Conservatives were more likely to share articles from fake news domains, which in 2016 were largely pro-Trump in orientation, than liberals or moderates. We also find a strong age effect, which persists after controlling for partisanship and ideology: On average, users over 65 shared nearly seven times as many articles from fake news domains as the youngest age group.

    Date Posted

    Jan 09, 2019

  • Working Paper

    Social Media, Political Polarization, and Political Disinformation: A Review of the Scientific Literature

    Hewlett Foundation, 2018

The following report is intended to provide an overview of the current state of the literature on the relationship between social media, political polarization, and political “disinformation” — a term used to encompass a wide range of types of information about politics found online, including “fake news,” rumors, deliberately factually incorrect information, inadvertently factually incorrect information, politically slanted information, and “hyperpartisan” news. The review of the literature is provided in six separate sections, each of which can be read individually but that cumulatively are intended to provide an overview of what is known — and unknown — about the relationship between social media, political polarization, and disinformation. The report concludes by identifying key gaps in our understanding of these phenomena and the data that are needed to address them.

    Date Posted

    Mar 19, 2018

  • Data Report