Academic Research

CSMaP faculty, postdoctoral fellows, and students publish rigorous, peer-reviewed research in top academic journals and post working papers sharing ongoing work.

  • Book

    Online Data and the Insurrection

    Media and January 6th, 2024

    Online data is key to understanding the lead-up to the January 6 insurrection, including how and why election fraud conspiracies spread online, how conspiracy groups organized to participate in the insurrection, and other aspects of online life that contributed to it. However, there are significant challenges in accessing data for this research. First, platforms restrict which researchers get access to data, as well as what researchers can do with the data they access. Second, this data is ephemeral: once users or the platform remove the data, researchers can no longer access it. These constraints shape which research questions can ever be asked and answered.

  • Journal Article

    Online Searches to Evaluate Misinformation Can Increase its Perceived Veracity

    Nature, 2024

    Considerable scholarly attention has been paid to understanding belief in online misinformation, with a particular focus on social networks. However, the dominant role of search engines in the information environment remains underexplored, even though the use of online search to evaluate the veracity of information is a central component of media literacy interventions. Although conventional wisdom suggests that searching online when evaluating misinformation would reduce belief in it, there is little empirical evidence to evaluate this claim. Here, across five experiments, we present consistent evidence that online search to evaluate the truthfulness of false news articles actually increases the probability of believing them. To shed light on this relationship, we combine survey data with digital trace data collected using a custom browser extension. We find that the search effect is concentrated among individuals for whom search engines return lower-quality information. Our results indicate that those who search online to evaluate misinformation risk falling into data voids, or informational spaces in which there is corroborating evidence from low-quality sources. We also find consistent evidence that searching online to evaluate news increases belief in true news from low-quality sources, but inconsistent evidence that it increases belief in true news from mainstream sources. Our findings highlight the need for media literacy programmes to ground their recommendations in empirically tested strategies and for search engines to invest in solutions to the challenges identified here.

    Date Posted: Dec 20, 2023

  • Journal Article

    A Synthesis of Evidence for Policy from Behavioural Science During COVID-19

    • Kai Ruggeri, 
    • Friederike Stock, 
    • S. Alexander Haslam, 
    • Valerio Capraro, 
    • Paulo Boggio, 
    • Naomi Ellemers, 
    • Aleksandra Cichocka, 
    • Karen M. Douglas, 
    • David G. Rand, 
    • Sander van der Linden, 
    • Mina Cikara, 
    • Eli J. Finkel, 
    • James N. Druckman, 
    • Michael J. A. Wohl, 
    • Richard E. Petty, 
    • Joshua A. Tucker, 
    • Azim Shariff, 
    • Michele Gelfand, 
    • Dominic Packer, 
    • Jolanda Jetten, 
    • Paul A. M. Van Lange, 
    • Gordon Pennycook, 
    • Ellen Peters, 
    • Katherine Baicker, 
    • Alia Crum, 
    • Kim A. Weeden, 
    • Lucy Napper, 
    • Nassim Tabri, 
    • Jamil Zaki, 
    • Linda Skitka, 
    • Shinobu Kitayama, 
    • Dean Mobbs, 
    • Cass R. Sunstein, 
    • Sarah Ashcroft-Jones, 
    • Anna Louise Todsen, 
    • Ali Hajian, 
    • Sanne Verra, 
    • Vanessa Buehler, 
    • Maja Friedemann, 
    • Marlene Hecht, 
    • Rayyan S. Mobarak, 
    • Ralitsa Karakasheva, 
    • Markus R. Tünte, 
    • Siu Kit Yeung, 
    • R. Shayna Rosenbaum, 
    • Žan Lep, 
    • Yuki Yamada, 
    • Sa-kiera Tiarra Jolynn Hudson, 
    • Lucía Macchia, 
    • Irina Soboleva, 
    • Eugen Dimant, 
    • Sandra J. Geiger, 
    • Hannes Jarke, 
    • Tobias Wingen, 
    • Jana Berkessel, 
    • Silvana Mareva, 
    • Lucy McGill, 
    • Francesca Papa, 
    • Bojana Većkalov, 
    • Zeina Afif, 
    • Eike K. Buabang, 
    • Marna Landman, 
    • Felice Tavera, 
    • Jack L. Andrews, 
    • Aslı Bursalıoğlu, 
    • Zorana Zupan, 
    • Lisa Wagner, 
    • Joaquin Navajas, 
    • Marek Vranka, 
    • David Kasdan, 
    • Patricia Chen, 
    • Kathleen R. Hudson, 
    • Lindsay M. Novak, 
    • Paul Teas, 
    • Nikolay R. Rachev, 
    • Matteo M. Galizzi, 
    • Katherine L. Milkman, 
    • Marija Petrović, 
    • Jay J. Van Bavel, 
    • Robb Willer

    Nature, 2023

    Scientific evidence regularly guides policy decisions, with behavioural science increasingly part of this process. In April 2020, an influential paper proposed 19 policy recommendations (‘claims’) detailing how evidence from behavioural science could contribute to efforts to reduce impacts and end the COVID-19 pandemic. Here we assess 747 pandemic-related research articles that empirically investigated those claims. We report the scale of evidence and whether evidence supports them to indicate applicability for policymaking. Two independent teams, involving 72 reviewers, found evidence for 18 of 19 claims, with both teams finding evidence supporting 16 (89%) of those 18 claims. The strongest evidence supported claims that anticipated culture, polarization and misinformation would be associated with policy effectiveness. Claims suggesting trusted leaders and positive social norms increased adherence to behavioural interventions also had strong empirical support, as did appealing to social consensus or bipartisan agreement. Targeted language in messaging yielded mixed effects and there were no effects for highlighting individual benefits or protecting others. No available evidence existed to assess any distinct differences in effects between using the terms ‘physical distancing’ and ‘social distancing’. Analysis of 463 papers containing data showed generally large samples; 418 involved human participants with a mean of 16,848 (median of 1,699). That statistical power underscored improved suitability of behavioural science research for informing policy decisions. Furthermore, by implementing a standardized approach to evidence selection and synthesis, we amplify broader implications for advancing scientific evidence in policy formulation and prioritization.

    Date Posted: Dec 13, 2023

  • Journal Article

    Testing the Effect of Information on Discerning the Veracity of News in Real Time

    Journal of Experimental Political Science, 2023

    Despite broad adoption of digital media literacy interventions that provide online users with more information when consuming news, relatively little is known about the effect of this additional information on the discernment of news veracity in real time. Gaining a comprehensive understanding of how information impacts discernment of news veracity has been hindered by challenges of external and ecological validity. Using a series of pre-registered experiments, we measure this effect in real time. Access to the full article relative to solely the headline/lede and access to source information improves an individual's ability to correctly discern the veracity of news. We also find that encouraging individuals to search online increases belief in both false/misleading and true news. Taken together, we provide a generalizable method for measuring the effect of information on news discernment, as well as crucial evidence for practitioners developing strategies for improving the public's digital media literacy.

    Date Posted: Nov 08, 2023

  • Journal Article

    Like-Minded Sources On Facebook Are Prevalent But Not Polarizing

    • Brendan Nyhan, 
    • Jaime Settle, 
    • Emily Thorson, 
    • Magdalena Wojcieszak, 
    • Pablo Barberá, 
    • Annie Y. Chen, 
    • Hunt Allcott, 
    • Taylor Brown, 
    • Adriana Crespo-Tenorio, 
    • Drew Dimmery, 
    • Deen Freelon, 
    • Matthew Gentzkow, 
    • Sandra González-Bailón, 
    • Andrew M. Guess, 
    • Edward Kennedy, 
    • Young Mie Kim, 
    • David Lazer, 
    • Neil Malhotra, 
    • Devra Moehler, 
    • Jennifer Pan, 
    • Daniel Robert Thomas, 
    • Rebekah Tromble, 
    • Carlos Velasco Rivera, 
    • Arjun Wilkins, 
    • Beixian Xiong, 
    • Chad Kiewiet De Jong, 
    • Annie Franco, 
    • Winter Mason, 
    • Natalie Jomini Stroud, 
    • Joshua A. Tucker

    Nature, 2023

    Many critics raise concerns about the prevalence of ‘echo chambers’ on social media and their potential role in increasing political polarization. However, the lack of available data and the challenges of conducting large-scale field experiments have made it difficult to assess the scope of the problem. Here we present data from 2020 for the entire population of active adult Facebook users in the USA showing that content from ‘like-minded’ sources constitutes the majority of what people see on the platform, although political information and news represent only a small fraction of these exposures. To evaluate a potential response to concerns about the effects of echo chambers, we conducted a multi-wave field experiment on Facebook among 23,377 users for whom we reduced exposure to content from like-minded sources during the 2020 US presidential election by about one-third. We found that the intervention increased their exposure to content from cross-cutting sources and decreased exposure to uncivil language, but had no measurable effects on eight preregistered attitudinal measures such as affective polarization, ideological extremity, candidate evaluations and belief in false claims. These precisely estimated results suggest that although exposure to content from like-minded sources on social media is common, reducing its prevalence during the 2020 US presidential election did not correspondingly reduce polarization in beliefs or attitudes.

  • Journal Article

    How Do Social Media Feed Algorithms Affect Attitudes and Behavior in an Election Campaign?

    • Andrew M. Guess, 
    • Neil Malhotra, 
    • Jennifer Pan, 
    • Pablo Barberá, 
    • Hunt Allcott, 
    • Taylor Brown, 
    • Adriana Crespo-Tenorio, 
    • Drew Dimmery, 
    • Deen Freelon, 
    • Matthew Gentzkow, 
    • Sandra González-Bailón, 
    • Edward Kennedy, 
    • Young Mie Kim, 
    • David Lazer, 
    • Devra Moehler, 
    • Brendan Nyhan, 
    • Jaime Settle, 
    • Carlos Velasco Rivera, 
    • Daniel Robert Thomas, 
    • Emily Thorson, 
    • Rebekah Tromble, 
    • Beixian Xiong, 
    • Chad Kiewiet De Jong, 
    • Annie Franco, 
    • Winter Mason, 
    • Natalie Jomini Stroud, 
    • Joshua A. Tucker

    Science, 2023

    We investigated the effects of Facebook’s and Instagram’s feed algorithms during the 2020 US election. We assigned a sample of consenting users to reverse-chronologically-ordered feeds instead of the default algorithms. Moving users out of algorithmic feeds substantially decreased the time they spent on the platforms and their activity. The chronological feed also affected exposure to content: The amount of political and untrustworthy content they saw increased on both platforms, the amount of content classified as uncivil or containing slur words they saw decreased on Facebook, and the amount of content from moderate friends and sources with ideologically mixed audiences they saw increased on Facebook. Despite these substantial changes in users’ on-platform experience, the chronological feed did not significantly alter levels of issue polarization, affective polarization, political knowledge, or other key attitudes during the 3-month study period.

  • Journal Article

    Reshares on Social Media Amplify Political News But Do Not Detectably Affect Beliefs or Opinions

    • Andrew M. Guess, 
    • Neil Malhotra, 
    • Jennifer Pan, 
    • Pablo Barberá, 
    • Hunt Allcott, 
    • Taylor Brown, 
    • Adriana Crespo-Tenorio, 
    • Drew Dimmery, 
    • Deen Freelon, 
    • Matthew Gentzkow, 
    • Sandra González-Bailón, 
    • Edward Kennedy, 
    • Young Mie Kim, 
    • David Lazer, 
    • Devra Moehler, 
    • Brendan Nyhan, 
    • Carlos Velasco Rivera, 
    • Jaime Settle, 
    • Daniel Robert Thomas, 
    • Emily Thorson, 
    • Rebekah Tromble, 
    • Arjun Wilkins, 
    • Magdalena Wojcieszak, 
    • Beixian Xiong, 
    • Chad Kiewiet De Jong, 
    • Annie Franco, 
    • Winter Mason, 
    • Natalie Jomini Stroud, 
    • Joshua A. Tucker

    Science, 2023

    We studied the effects of exposure to reshared content on Facebook during the 2020 US election by assigning a random set of consenting, US-based users to feeds that did not contain any reshares over a 3-month period. We find that removing reshared content substantially decreases the amount of political news, including content from untrustworthy sources, to which users are exposed; decreases overall clicks and reactions; and reduces partisan news clicks. Further, we observe that removing reshared content produces clear decreases in news knowledge within the sample, although there is some uncertainty about how this would generalize to all users. Contrary to expectations, the treatment does not significantly affect political polarization or any measure of individual-level political attitudes.

  • Journal Article

    Asymmetric Ideological Segregation In Exposure To Political News on Facebook

    • Sandra González-Bailón, 
    • David Lazer, 
    • Pablo Barberá, 
    • Meiqing Zhang, 
    • Hunt Allcott, 
    • Taylor Brown, 
    • Adriana Crespo-Tenorio, 
    • Deen Freelon, 
    • Matthew Gentzkow, 
    • Andrew M. Guess, 
    • Shanto Iyengar, 
    • Young Mie Kim, 
    • Neil Malhotra, 
    • Devra Moehler, 
    • Brendan Nyhan, 
    • Jennifer Pan, 
    • Carlos Velasco Rivera, 
    • Jaime Settle, 
    • Emily Thorson, 
    • Rebekah Tromble, 
    • Arjun Wilkins, 
    • Magdalena Wojcieszak, 
    • Chad Kiewiet De Jong, 
    • Annie Franco, 
    • Winter Mason, 
    • Joshua A. Tucker, 
    • Natalie Jomini Stroud

    Science, 2023

    Does Facebook enable ideological segregation in political news consumption? We analyzed exposure to news during the US 2020 election using aggregated data for 208 million US Facebook users. We compared the inventory of all political news that users could have seen in their feeds with the information that they saw (after algorithmic curation) and the information with which they engaged. We show that (i) ideological segregation is high and increases as we shift from potential exposure to actual exposure to engagement; (ii) there is an asymmetry between conservative and liberal audiences, with a substantial corner of the news ecosystem consumed exclusively by conservatives; and (iii) most misinformation, as identified by Meta’s Third-Party Fact-Checking Program, exists within this homogeneously conservative corner, which has no equivalent on the liberal side. Sources favored by conservative audiences were more prevalent on Facebook’s news ecosystem than those favored by liberals.

  • Working Paper

    WhatsApp Increases Exposure to False Rumors But Has Limited Effects on Beliefs and Polarization: Evidence from a Multimedia-Constrained Deactivation

    Working Paper, May 2023

    For years WhatsApp has been the primary social media application in many countries of the Global South. Numerous journalistic and scholarly accounts suggest that the platform has become a fertile ground for spreading misinformation and partisan content, with some going so far as to assert that WhatsApp could seriously impact electoral outcomes, episodes of violence, and vaccine hesitancy around the world. However, no studies so far have been able to show causal links between WhatsApp usage and these alleged changes in citizens' attitudes and behaviors. To fill this gap, we conducted a field experiment that reduced users' WhatsApp activity in the weeks ahead of the most recent Brazilian presidential election. Our field experiment randomly assigned users to a multimedia deactivation, in which participants turned off automatic downloads of any multimedia (image, video, or audio) on WhatsApp and were incentivized not to access any multimedia content during the weeks leading up to the election on October 2, 2022. We find that the deactivation significantly reduced subjects’ exposure to false rumors that circulated widely during the weeks before the election. However, consistent with the minimal-effects tradition, the direct consequences of reducing exposure to misinformation on WhatsApp in the weeks before the election are limited and do not lead to significant changes in belief accuracy and political polarization. Our study expands the growing literature on the causal effects of reducing social media usage on political attitudes by focusing on the role of exposure to misinformation in the Global South.

  • Book

    Computational Social Science for Policy and Quality of Democracy: Public Opinion, Hate Speech, Misinformation, and Foreign Influence Campaigns

    Handbook of Computational Social Science for Policy, 2023

    The intersection of social media and politics is yet another realm in which Computational Social Science has a paramount role to play. In this review, I examine the questions that computational social scientists are attempting to answer – as well as the tools and methods they are developing to do so – in three areas where the rise of social media has led to concerns about the quality of democracy in the digital information era: online hate; misinformation; and foreign influence campaigns. I begin, however, by considering a precursor of these topics – and also a potential hope for social media to be able to positively impact the quality of democracy – by exploring attempts to measure public opinion online using Computational Social Science methods. In all four areas, computational social scientists have made great strides in providing information to policy makers and the public regarding the evolution of these very complex phenomena but in all cases could do more to inform public policy with better access to the necessary data; this point is discussed in more detail in the conclusion of the review.

  • Working Paper

    Social Media, Information, and Politics: Insights on Latinos in the U.S.

    Working Paper, November 2022

    Millions of Americans use social media to acquire political news and information. Most research on this topic has focused on understanding how social media consumption affects the political behavior and preferences of White Americans. Much less is known about the political activity on social media of Latinos, who are not only the largest racial/ethnic minority group in the U.S. but also continue to exhibit diverse political preferences. Moreover, about 30% of Latinos rely primarily on Spanish-language news sources (Spanish-dominant Latinos) and another 30% are bilingual. Given that Spanish-language social media is not as heavily monitored for misinformation as its English-language counterparts (Valencia, 2021; Paul, 2021), Spanish-dominant Latinos who rely on social media for news may be more susceptible to political misinformation than Latinos who are exposed to English-language social media. We address this contention by fielding an original study with a large sample of Latino and White respondents. Consistent with our expectations, Latinos who rely on Spanish-language social media are more likely to believe in election fraud than those who use both English- and Spanish-language social media news sources. We also find that Latinos engage in more political activities on social media than White Americans, particularly on their social media of choice, WhatsApp.

  • Journal Article

    Election Fraud, YouTube, and Public Perception of the Legitimacy of President Biden

    Journal of Online Trust and Safety, 2022

    Skepticism about the outcome of the 2020 presidential election in the United States led to a historic attack on the Capitol on January 6th, 2021 and represents one of the greatest challenges to America's democratic institutions in over a century. Narratives of fraud and conspiracy theories proliferated over the fall of 2020, finding fertile ground across online social networks, although little is known about the extent and drivers of this spread. In this article, we show that users who were more skeptical of the election's legitimacy were more likely to be recommended content that featured narratives about the legitimacy of the election. Our findings underscore the tension between an "effective" recommendation system that provides users with the content they want, and a dangerous mechanism by which misinformation, disinformation, and conspiracies can find their way to those most likely to believe them.

    Date Posted: Sep 01, 2022

  • Journal Article

    What We Learned About The Gateway Pundit from its Own Web Traffic Data

    Workshop Proceedings of the 16th International AAAI Conference on Web and Social Media, 2022

    To mitigate the spread of false news, researchers need to understand who visits low-quality news sites, what brings people to those sites, and what content they prefer to consume. Due to challenges in observing most direct website traffic, existing research primarily relies on alternative data sources, such as engagement signals from social media posts. However, such signals are at best only proxies for actual website visits. During an audit of far-right news websites, we discovered that The Gateway Pundit (TGP) has made its web traffic data publicly available, giving us a rare opportunity to understand what news pages people actually visit. We collected 68 million web traffic visits to the site over a one-month period and analyzed how people consume news via multiple features. Our referral analysis shows that search engines and social media platforms are the main drivers of traffic; our geo-location analysis reveals that TGP is more popular in counties where more people voted for Trump in 2020. In terms of content, topics related to the 2020 US presidential election and the 2021 US Capitol riot have the highest average number of visits. We also use these data to quantify to what degree social media engagement signals correlate with actual web visit counts. To do so, we collected Facebook and Twitter posts with URLs from TGP during the same time period. We show that all engagement signals positively correlate with web visit counts, but with varying correlation strengths. For example, total interaction on Facebook correlates better than Twitter retweet count. Our insights can also help researchers choose the right metrics when they measure the impact of news URLs on social media.

    Date Posted: Jun 01, 2022

  • Working Paper

    Echo Chambers, Rabbit Holes, and Algorithmic Bias: How YouTube Recommends Content to Real Users

    Working Paper, May 2022

    To what extent does the YouTube recommendation algorithm push users into echo chambers, ideologically biased content, or rabbit holes? Despite growing popular concern, recent work suggests that the recommendation algorithm is not pushing users into these echo chambers. However, existing research relies heavily on the use of anonymous data collection that does not account for the personalized nature of the recommendation algorithm. We asked a sample of real users to install a browser extension that downloaded the list of videos they were recommended. We instructed these users to start on an assigned video and then click through 20 sets of recommendations, capturing what they were being shown in real time as they used the platform logged into their real accounts. Using a novel method to estimate the ideology of a YouTube video, we demonstrate that the YouTube recommendation algorithm does, in fact, push real users into mild ideological echo chambers where, by the end of the data collection task, liberals and conservatives received different distributions of recommendations from each other, though this difference is small. While we find evidence that this difference increases the longer the user followed the recommendation algorithm, we do not find evidence that many go down ‘rabbit holes’ that lead them to ideologically extreme content. Finally, we find that YouTube pushes all users, regardless of ideology, towards moderately conservative and an increasingly narrow range of ideological content the longer they follow YouTube's recommendations.

    Date Posted: May 11, 2022

  • Journal Article

    News Credibility Labels Have Limited Average Effects on News Diet Quality and Fail to Reduce Misperceptions

    Science Advances, 2022

    As the primary arena for viral misinformation shifts toward transnational threats, the search continues for scalable countermeasures compatible with principles of transparency and free expression. We conducted a randomized field experiment evaluating the impact of source credibility labels embedded in users’ social feeds and search results pages. By combining representative surveys (n = 3337) and digital trace data (n = 968) from a subset of respondents, we provide a rare ecologically valid test of such an intervention on both attitudes and behavior. On average across the sample, we are unable to detect changes in real-world consumption of news from low-quality sources after 3 weeks. We can also rule out small effects on perceived accuracy of popular misinformation spread about the Black Lives Matter movement and coronavirus disease 2019. However, we present suggestive evidence of a substantively meaningful increase in news diet quality among the heaviest consumers of misinformation. We discuss the implications of our findings for scholars and practitioners.

    Date Posted: May 06, 2022

  • Journal Article

    Moderating with the Mob: Evaluating the Efficacy of Real-Time Crowdsourced Fact-Checking

    Journal of Online Trust and Safety, 2021

    Reducing the spread of false news remains a challenge for social media platforms, as the current strategy of using third-party fact-checkers lacks the capacity to address both the scale and speed of misinformation diffusion. Research on the “wisdom of the crowds” suggests one possible solution: aggregating the evaluations of ordinary users to assess the veracity of information. In this study, we investigate the effectiveness of a scalable model for real-time crowdsourced fact-checking. We select 135 popular news stories and have them evaluated by both ordinary individuals and professional fact-checkers within 72 hours of publication, producing 12,883 individual evaluations. Although we find that machine learning-based models using the crowd perform better at identifying false news than simple aggregation rules, our results suggest that neither approach is able to perform at the level of professional fact-checkers. Additionally, both methods perform best when using evaluations only from survey respondents with high political knowledge, suggesting reason for caution for crowdsourced models that rely on a representative sample of the population. Overall, our analyses reveal that while crowd-based systems provide some information on news quality, they are nonetheless limited—and have significant variation—in their ability to identify false news.

    Date Posted: Oct 28, 2021

  • Journal Article

    Twitter Flagged Donald Trump’s Tweets with Election Misinformation: They Continued to Spread Both On and Off the Platform

    Harvard Kennedy School (HKS) Misinformation Review, 2021

    We analyze the spread of Donald Trump’s tweets that were flagged by Twitter using two intervention strategies—attaching a warning label and blocking engagement with the tweet entirely. We find that while blocking engagement on certain tweets limited their diffusion, messages we examined with warning labels spread further on Twitter than those without labels. Additionally, the messages that had been blocked on Twitter remained popular on Facebook, Instagram, and Reddit, being posted more often and garnering more visibility than messages that had either been labeled by Twitter or received no intervention at all. Taken together, our results emphasize the importance of considering content moderation at the ecosystem level.

  • Journal Article

    Cracking Open the News Feed: Exploring What U.S. Facebook Users See and Share with Large-Scale Platform Data

    Journal of Quantitative Description: Digital Media, 2021

    In this study, we analyze for the first time newly available engagement data covering millions of web links shared on Facebook to describe how and by which categories of U.S. users different types of news are seen and shared on the platform. We focus on articles from low-credibility news publishers, credible news sources, purveyors of clickbait, and news specifically about politics, which we identify through a combination of curated lists and supervised classifiers. Our results support recent findings that more fake news is shared by older users and conservatives and that both viewing and sharing patterns suggest a preference for ideologically congenial misinformation. We also find that fake news articles related to politics are more popular among older Americans than other types, while the youngest users share relatively more articles with clickbait headlines. Across the platform, however, articles from credible news sources are shared over 5.5 times more often and viewed over 7.5 times more often than articles from low-credibility sources. These findings offer important context for researchers studying the spread and consumption of information — including misinformation — on social media.

    Date Posted: Apr 26, 2021

  • Journal Article

    Political Psychology in the Digital (mis)Information age: A Model of News Belief and Sharing

    Social Issues and Policy Review, 2021

    The spread of misinformation, including “fake news,” propaganda, and conspiracy theories, represents a serious threat to society, as it has the potential to alter beliefs, behavior, and policy. Research is beginning to disentangle how and why misinformation is spread and identify processes that contribute to this social problem. We propose an integrative model to understand the social, political, and cognitive psychology risk factors that underlie the spread of misinformation and highlight strategies that might be effective in mitigating this problem. However, the spread of misinformation is a rapidly growing and evolving problem; thus scholars need to identify and test novel solutions, and work with policymakers to evaluate and deploy these solutions. Hence, we provide a roadmap for future research to identify where scholars should invest their energy in order to have the greatest overall impact.

    Date Posted: Jan 22, 2021

  • Journal Article

    You Won’t Believe Our Results! But They Might: Heterogeneity in Beliefs About the Accuracy of Online Media

    Journal of Experimental Political Science, 2021

    “Clickbait” media has long been espoused as an unfortunate consequence of the rise of digital journalism. But little is known about why readers choose to read clickbait stories. Is it merely curiosity, or might voters think such stories are more likely to provide useful information? We conduct a survey experiment in Italy, where a major political party enthusiastically embraced the esthetics of new media and encouraged their supporters to distrust legacy outlets in favor of online news. We offer respondents a monetary incentive for correct answers to manipulate the relative salience of the motivation for accurate information. This incentive increases differences in the preference for clickbait; older and less educated subjects become even more likely to opt to read a story with a clickbait headline when the incentive to produce a factually correct answer is higher. Our model suggests that a politically relevant subset of the population prefers Clickbait Media because they trust it more.

    Date Posted: Jan 20, 2021

  • Journal Article

    Political Knowledge and Misinformation in the Era of Social Media: Evidence From the 2015 UK Election

    British Journal of Political Science, 2022

    Does social media educate voters, or mislead them? This study measures changes in political knowledge among a panel of voters surveyed during the 2015 UK general election campaign while monitoring the political information to which they were exposed on the Twitter social media platform. The study's panel design permits identification of the effect of information exposure on changes in political knowledge. Twitter use led to higher levels of knowledge about politics and public affairs, as information from news media improved knowledge of politically relevant facts, and messages sent by political parties increased knowledge of party platforms. But in a troubling demonstration of campaigns' ability to manipulate knowledge, messages from the parties also shifted voters' assessments of the economy and immigration in directions favorable to the parties' platforms, leaving some voters with beliefs further from the truth at the end of the campaign than they were at its beginning.

  • Journal Article

    Using Social and Behavioral Science to Support COVID-19 Pandemic Response

    • Jay J. Van Bavel, 
    • Katherine Baicker, 
    • Paulo Boggio, 
    • Valerio Capraro, 
    • Aleksandra Cichocka, 
    • Mina Cikara, 
    • Molly J. Crockett, 
    • Alia Crum, 
    • Karen M. Douglas, 
    • James N. Druckman, 
    • John Drury, 
    • Oeindrila Dube, 
    • Naomi Ellemers, 
    • Eli J. Finkel, 
    • James H. Fowler, 
    • Michele Gelfand, 
    • Shihui Han, 
    • S. Alexander Haslam, 
    • Jolanda Jetten, 
    • Shinobu Kitayama, 
    • Dean Mobbs, 
    • Lucy Napper, 
    • Dominic Packer, 
    • Gordon Pennycook, 
    • Ellen Peters, 
    • Richard E. Petty, 
    • David G. Rand, 
    • Stephen D. Reicher, 
    • Simone Schnall, 
    • Azim Shariff, 
    • Linda Skitka, 
    • Sandra Susan Smith, 
    • Cass R. Sunstein, 
    • Nassim Tabri, 
    • Joshua A. Tucker, 
    • Sander van der Linden, 
    • Paul A. M. Van Lange, 
    • Kim A. Weeden, 
    • Michael J. A. Wohl, 
    • Jamil Zaki, 
    • Sean R. Zion, 
    • Robb Willer

    Nature Human Behaviour, 2020

    The COVID-19 pandemic represents a massive global health crisis. Because the crisis requires large-scale behaviour change and places significant psychological burdens on individuals, insights from the social and behavioural sciences can be used to help align human behaviour with the recommendations of epidemiologists and public health experts. Here we discuss evidence from a selection of research topics relevant to pandemics, including work on navigating threats, social and cultural influences on behaviour, science communication, moral decision-making, leadership, and stress and coping. In each section, we note the nature and quality of prior research, including uncertainty and unsettled issues. We identify several insights for effective response to the COVID-19 pandemic and highlight important gaps researchers should move quickly to fill in the coming weeks and months.

    Date Posted: Apr 30, 2020

  • Journal Article

    Less Than You Think: Prevalence and Predictors of Fake News Dissemination on Facebook

    Science Advances, 2019

    So-called “fake news” has renewed concerns about the prevalence and effects of misinformation in political campaigns. Given the potential for widespread dissemination of this material, we examine the individual-level characteristics associated with sharing false articles during the 2016 U.S. presidential campaign. To do so, we uniquely link an original survey with respondents’ sharing activity as recorded in Facebook profile data. First and foremost, we find that sharing this content was a relatively rare activity. Conservatives were more likely to share articles from fake news domains, which in 2016 were largely pro-Trump in orientation, than liberals or moderates. We also find a strong age effect, which persists after controlling for partisanship and ideology: On average, users over 65 shared nearly seven times as many articles from fake news domains as the youngest age group.

    Date Posted: Jan 09, 2019

  • Working Paper

    Social Media, Political Polarization, and Political Disinformation: A Review of the Scientific Literature

    Hewlett Foundation, 2018

    The following report is intended to provide an overview of the current state of the literature on the relationship between social media; political polarization; and political “disinformation,” a term used to encompass a wide range of types of information about politics found online, including “fake news,” rumors, deliberately factually incorrect information, inadvertently factually incorrect information, politically slanted information, and “hyperpartisan” news. The review of the literature is provided in six separate sections, each of which can be read individually but that cumulatively are intended to provide an overview of what is known—and unknown—about the relationship between social media, political polarization, and disinformation. The report concludes by identifying key gaps in our understanding of these phenomena and the data that are needed to address them.

    Date Posted: Mar 19, 2018