Academic Research

As an academic research institute dedicated to studying how social media impacts politics, policy, and democracy, CSMaP publishes peer-reviewed research in top academic journals and produces rigorous data reports on policy-relevant topics.

  • Journal Article

    Election Fraud, YouTube, and Public Perception of the Legitimacy of President Biden

    Journal of Online Trust and Safety, 2022

    Skepticism about the outcome of the 2020 presidential election in the United States led to a historic attack on the Capitol on January 6th, 2021, and represents one of the greatest challenges to America's democratic institutions in over a century. Narratives of fraud and conspiracy theories proliferated over the fall of 2020, finding fertile ground across online social networks, although little is known about the extent and drivers of this spread. In this article, we show that users who were more skeptical of the election's legitimacy were more likely to be recommended content that featured narratives about the legitimacy of the election. Our findings underscore the tension between an "effective" recommendation system that provides users with the content they want, and a dangerous mechanism by which misinformation, disinformation, and conspiracies can find their way to those most likely to believe them.

    Date Posted

    Sep 01, 2022

  • Journal Article

    News Credibility Labels Have Limited Average Effects on News Diet Quality and Fail to Reduce Misperceptions

    Science Advances, 2022

    As the primary arena for viral misinformation shifts toward transnational threats, the search continues for scalable countermeasures compatible with principles of transparency and free expression. We conducted a randomized field experiment evaluating the impact of source credibility labels embedded in users’ social feeds and search results pages. By combining representative surveys (n = 3337) and digital trace data (n = 968) from a subset of respondents, we provide a rare ecologically valid test of such an intervention on both attitudes and behavior. On average across the sample, we are unable to detect changes in real-world consumption of news from low-quality sources after 3 weeks. We can also rule out small effects on perceived accuracy of popular misinformation spread about the Black Lives Matter movement and coronavirus disease 2019. However, we present suggestive evidence of a substantively meaningful increase in news diet quality among the heaviest consumers of misinformation. We discuss the implications of our findings for scholars and practitioners.

    Date Posted

    May 06, 2022

  • Journal Article

    Why Botter: How Pro-Government Bots Fight Opposition in Russia

    American Political Science Review, 2022

    There is abundant anecdotal evidence that nondemocratic regimes are harnessing new digital technologies known as social media bots to facilitate policy goals. However, few previous attempts have been made to systematically analyze the use of bots that are aimed at a domestic audience in autocratic regimes. We develop two alternative theoretical frameworks for predicting the use of pro-regime bots: one which focuses on bot deployment in response to offline protest and the other in response to online protest. We then test the empirical implications of these frameworks with an original collection of Twitter data generated by Russian pro-government bots. We find that online opposition activities produce stronger reactions from bots than offline protests. Our results provide a lower bound on the effects of bots on the Russian Twittersphere and highlight the importance of bot detection for the study of political communication on social media in nondemocratic regimes.

    Date Posted

    Feb 21, 2022

  • Journal Article

    What’s Not to Like? Facebook Page Likes Reveal Limited Polarization in Lifestyle Preferences

    Political Communication, 2021

    Increasing levels of political animosity in the United States invite speculation about whether polarization extends to aspects of daily life. However, empirical study of the relationship between political ideologies and lifestyle choices is limited by a lack of comprehensive data. In this research, we combine survey and Facebook Page “likes” data from more than 1,200 respondents to investigate the extent of polarization in lifestyle domains. Our results indicate that polarization is present in page categories that are somewhat related to politics – such as opinion leaders, partisan news sources, and topics related to identity and religion – but, perhaps surprisingly, it is mostly not evident in other domains, including sports, food, and music. On the individual level, we find that people who are higher in political news interest and have stronger ideological predispositions have a greater tendency to “like” ideologically homogeneous pages across categories. Our evidence, drawn from rare digital trace data covering more than 5,000 pages, adds nuance to the narrative of widespread polarization across lifestyle sectors and it suggests domains in which cross-cutting preferences are still observed in American life.

    Date Posted

    Nov 25, 2021

  • Journal Article

    Short of Suspension: How Suspension Warnings Can Reduce Hate Speech on Twitter

    Perspectives on Politics, 2021

    Debates around the effectiveness of high-profile Twitter account suspensions and similar bans on abusive users across social media platforms abound. Yet we know little about the effectiveness of warning a user about the possibility of suspending their account as opposed to outright suspensions in reducing hate speech. With a pre-registered experiment, we provide causal evidence that a warning message can reduce the use of hateful language on Twitter, at least in the short term. We design our messages based on the literature on deterrence, and test versions that emphasize the legitimacy of the sender, the credibility of the message, and the costliness of being suspended. We find that the act of warning a user of the potential consequences of their behavior can significantly reduce their hateful language for one week. We also find that warning messages that aim to appear legitimate in the eyes of the target user seem to be the most effective. In light of these findings, we consider the policy implications of platforms adopting a more aggressive approach to warning users that their accounts may be suspended as a tool for reducing hateful speech online.

    Date Posted

    Nov 22, 2021

  • Journal Article

    Moderating with the Mob: Evaluating the Efficacy of Real-Time Crowdsourced Fact-Checking

    Journal of Online Trust and Safety, 2021

    Reducing the spread of false news remains a challenge for social media platforms, as the current strategy of using third-party fact-checkers lacks the capacity to address both the scale and speed of misinformation diffusion. Research on the “wisdom of the crowds” suggests one possible solution: aggregating the evaluations of ordinary users to assess the veracity of information. In this study, we investigate the effectiveness of a scalable model for real-time crowdsourced fact-checking. We select 135 popular news stories and have them evaluated by both ordinary individuals and professional fact-checkers within 72 hours of publication, producing 12,883 individual evaluations. Although we find that machine learning-based models using the crowd perform better at identifying false news than simple aggregation rules, our results suggest that neither approach is able to perform at the level of professional fact-checkers. Additionally, both methods perform best when using evaluations only from survey respondents with high political knowledge, suggesting reason for caution for crowdsourced models that rely on a representative sample of the population. Overall, our analyses reveal that while crowd-based systems provide some information on news quality, they are nonetheless limited—and have significant variation—in their ability to identify false news.

    Date Posted

    Oct 28, 2021
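
The simple aggregation rules this abstract contrasts with machine-learning models can be illustrated with a minimal sketch. The story names, ratings, and the majority-share threshold below are hypothetical illustrations, not the paper's actual data or decision rule.

```python
# Minimal sketch of crowd-based veracity aggregation: flag a story as
# false when the share of individual "false" evaluations exceeds a
# threshold. Threshold and data are hypothetical.

def crowd_verdict(ratings, threshold=0.5):
    """Aggregate individual evaluations with a simple majority-share rule."""
    share_false = sum(1 for r in ratings if r == "false") / len(ratings)
    return "false" if share_false > threshold else "not false"

# Each list holds one ordinary rater's evaluation per story.
story_ratings = {
    "story_a": ["false", "false", "true", "false"],
    "story_b": ["true", "true", "false", "true"],
}

verdicts = {s: crowd_verdict(r) for s, r in story_ratings.items()}
print(verdicts)  # story_a flagged as false, story_b not
```

The paper's finding is that even learned aggregations of such ratings fall short of professional fact-checkers, so a rule like this is a floor, not a substitute.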

  • Journal Article

    SARS-CoV-2 RNA Concentrations in Wastewater Foreshadow Dynamics and Clinical Presentation of New COVID-19 Cases

    • Fuqing Wu, 
    • Amy Xiao, 
    • Jianbo Zhang, 
    • Katya Moniz, 
    • Noriko Endo, 
    • Federica Armas, 
    • Richard Bonneau, 
    • Megan A. Brown, 
    • Mary Bushman, 
    • Peter R. Chai, 
    • Claire Duvallet, 
    • Timothy B. Erickson, 
    • Katelyn Foppe, 
    • Newsha Ghaeli, 
    • Xiaoqiong Gu, 
    • William P. Hanage, 
    • Katherine H. Huang, 
    • Wei Lin Lee, 
    • Mariana Matus, 
    • Kyle A. McElroy, 
    • Jonathan Nagler, 
    • Steven F. Rhode, 
    • Mauricio Santillana, 
    • Joshua A. Tucker, 
    • Stefan Wuertz, 
    • Shijie Zhao, 
    • Janelle Thompson, 
    • Eric J. Alm

    Science of the Total Environment, 2022

    Current estimates of COVID-19 prevalence are largely based on symptomatic, clinically diagnosed cases. The existence of a large number of undiagnosed infections hampers population-wide investigation of viral circulation. Here, we quantify the SARS-CoV-2 concentration and track its dynamics in wastewater at a major urban wastewater treatment facility in Massachusetts, between early January and May 2020. SARS-CoV-2 was first detected in wastewater on March 3. SARS-CoV-2 RNA concentrations in wastewater correlated with clinically diagnosed new COVID-19 cases, with the trends appearing 4–10 days earlier in wastewater than in clinical data. We inferred viral shedding dynamics by modeling wastewater viral load as a convolution of back-dated new clinical cases with the average population-level viral shedding function. The inferred viral shedding function showed an early peak, likely before symptom onset and clinical diagnosis, consistent with emerging clinical and experimental evidence. This finding suggests that SARS-CoV-2 concentrations in wastewater may be primarily driven by viral shedding early in infection. This work shows that longitudinal wastewater analysis can be used to identify trends in disease transmission in advance of clinical case reporting, and infer early viral shedding dynamics for newly infected individuals, which are difficult to capture in clinical investigations.

    Date Posted

    Sep 14, 2021

  • Journal Article

    Twitter Flagged Donald Trump’s Tweets with Election Misinformation: They Continued to Spread Both On and Off the Platform

    Harvard Kennedy School (HKS) Misinformation Review, 2021

    We analyze the spread of Donald Trump’s tweets that were flagged by Twitter using two intervention strategies—attaching a warning label and blocking engagement with the tweet entirely. We find that while blocking engagement on certain tweets limited their diffusion, messages we examined with warning labels spread further on Twitter than those without labels. Additionally, the messages that had been blocked on Twitter remained popular on Facebook, Instagram, and Reddit, being posted more often and garnering more visibility than messages that had either been labeled by Twitter or received no intervention at all. Taken together, our results emphasize the importance of considering content moderation at the ecosystem level.

  • Journal Article

    Testing the Effects of Facebook Usage in an Ethnically Polarized Setting

    Proceedings of the National Academy of Sciences, 2021

    Despite the belief that social media is altering intergroup dynamics—bringing people closer or further alienating them from one another—the impact of social media on interethnic attitudes has yet to be rigorously evaluated, especially within areas with tenuous interethnic relations. We report results from a randomized controlled trial in Bosnia and Herzegovina (BiH), exploring the effects of exposure to social media during one week around genocide remembrance in July 2019 on a set of interethnic attitudes of Facebook users. We find evidence that, counter to preregistered expectations, people who deactivated their Facebook profiles report lower regard for ethnic outgroups than those who remained active. Moreover, we present additional evidence suggesting that this effect is likely conditional on the level of ethnic heterogeneity of respondents’ residence. We also extend the analysis to include measures of subjective well-being and knowledge of news. Here, we find that Facebook deactivation leads to suggestive improvements in subjective well-being and a decrease in knowledge of current events, replicating results from recent research in the United States in a very different context, thus increasing our confidence in the generalizability of these effects.

    Date Posted

    Jun 22, 2021

  • Journal Article

    Accessibility and Generalizability: Are Social Media Effects Moderated by Age or Digital Literacy?

    Research & Politics, 2021

    An emerging empirical regularity suggests that older people use and respond to social media very differently than younger people. Older people are the fastest-growing population of Internet and social media users in the U.S., and this heterogeneity will soon become central to online politics. However, many important experiments in this field have been conducted on online samples that do not contain enough older people to be useful to generalize to the current population of Internet users; this issue is more pronounced for studies that are even a few years old. In this paper, we report the results of replicating two experiments involving social media (specifically, Facebook) conducted on one such sample lacking older users (Amazon’s Mechanical Turk) using a source of online subjects which does contain sufficient variation in subject age. We add a standard battery of questions designed to explicitly measure digital literacy. We find evidence of significant treatment effect heterogeneity in subject age and digital literacy in the replication of one of the two experiments. This result is an example of limitations to generalizability of research conducted on samples where selection is related to treatment effect heterogeneity; specifically, this result indicates that Mechanical Turk should not be used to recruit subjects when researchers suspect treatment effect heterogeneity in age or digital literacy, as we argue should be the case for research on digital media effects.

    Date Posted

    Jun 09, 2021

  • Journal Article

    Cracking Open the News Feed: Exploring What U.S. Facebook Users See and Share with Large-Scale Platform Data

    Journal of Quantitative Description: Digital Media, 2021

    In this study, we analyze for the first time newly available engagement data covering millions of web links shared on Facebook to describe how and by which categories of U.S. users different types of news are seen and shared on the platform. We focus on articles from low-credibility news publishers, credible news sources, purveyors of clickbait, and news specifically about politics, which we identify through a combination of curated lists and supervised classifiers. Our results support recent findings that more fake news is shared by older users and conservatives and that both viewing and sharing patterns suggest a preference for ideologically congenial misinformation. We also find that fake news articles related to politics are more popular among older Americans than other types, while the youngest users share relatively more articles with clickbait headlines. Across the platform, however, articles from credible news sources are shared over 5.5 times more often and viewed over 7.5 times more often than articles from low-credibility sources. These findings offer important context for researchers studying the spread and consumption of information — including misinformation — on social media.

    Date Posted

    Apr 26, 2021

  • Journal Article

    The Times They Are Rarely A-Changin': Circadian Regularities in Social Media Use

    Journal of Quantitative Description: Digital Media, 2021

    This paper uses geolocated Twitter histories from approximately 25,000 individuals in 6 different time zones and 3 different countries to construct a proper time-zone dependent hourly baseline for social media activity studies. We establish that, across multiple regions and time periods, interaction with social media is strongly conditioned by traditional bio-rhythmic or “Circadian” patterns, and that in the United States, this pattern is itself further conditioned by the ideological bent of the user. Using a time series of these histories around the 2016 U.S. Presidential election, we show that external events of great significance can disrupt traditional social media activity patterns, and that this disruption can be significant (in some cases doubling the amplitude and shifting the phase of activity up to an hour). We find that the disruption of use patterns can last an extended period of time, and in many cases, aspects of this disruption would not be detected without a circadian baseline.

    Date Posted

    Apr 26, 2021

  • Journal Article

    YouTube Recommendations and Effects on Sharing Across Online Social Platforms

    Proceedings of the ACM on Human-Computer Interaction, 2021

    In January 2019, YouTube announced it would exclude potentially harmful content from video recommendations but allow such videos to remain on the platform. While this step intends to reduce YouTube's role in propagating such content, continued availability of these videos in other online spaces makes it unclear whether this compromise actually reduces their spread. To assess this impact, we apply interrupted time series models to measure whether different types of YouTube sharing in Twitter and Reddit changed significantly in the eight months around YouTube's announcement. We evaluate video sharing across three curated sets of potentially harmful, anti-social content: a set of conspiracy videos that have been shown to experience reduced recommendations in YouTube, a larger set of videos posted by conspiracy-oriented channels, and a set of videos posted by alternative influence network (AIN) channels. As a control, we also evaluate effects on video sharing in a dataset of videos from mainstream news channels. Results show conspiracy-labeled and AIN videos that have evidence of YouTube's de-recommendation experience a significant decreasing trend in sharing on both Twitter and Reddit. For videos from conspiracy-oriented channels, however, we see no significant effect in Twitter but find a significant increase in the level of conspiracy-channel sharing in Reddit. For mainstream news sharing, we actually see an increase in trend on both platforms, suggesting that YouTube's suppression of particular content types has a targeted effect. This work finds evidence that reducing exposure to anti-social videos within YouTube, without deletion, has potential pro-social, cross-platform effects. At the same time, increases in the level of conspiracy-channel sharing raise concerns about content producers' responses to these changes, and platform transparency is needed to evaluate these effects further.

    Date Posted

    Apr 22, 2021

  • Journal Article

    Tweeting Beyond Tahrir: Ideological Diversity and Political Intolerance in Egyptian Twitter Networks

    World Politics, 2021

    Do online social networks affect political tolerance in the highly polarized climate of postcoup Egypt? Taking advantage of the real-time networked structure of Twitter data, the authors find that not only is greater network diversity associated with lower levels of intolerance, but also that longer exposure to a diverse network is linked to less expression of intolerance over time. The authors find that this relationship persists in both elite and non-elite diverse networks. Exploring the mechanisms by which network diversity might affect tolerance, the authors offer suggestive evidence that social norms in online networks may shape individuals’ propensity to publicly express intolerant attitudes. The findings contribute to the political tolerance literature and enrich the ongoing debate over the relationship between online echo chambers and political attitudes and behavior by providing new insights from a repressive authoritarian context.

  • Journal Article

    Political Psychology in the Digital (Mis)information Age: A Model of News Belief and Sharing

    Social Issues and Policy Review, 2021

    The spread of misinformation, including “fake news,” propaganda, and conspiracy theories, represents a serious threat to society, as it has the potential to alter beliefs, behavior, and policy. Research is beginning to disentangle how and why misinformation is spread and identify processes that contribute to this social problem. We propose an integrative model to understand the social, political, and cognitive psychology risk factors that underlie the spread of misinformation and highlight strategies that might be effective in mitigating this problem. However, the spread of misinformation is a rapidly growing and evolving problem; thus scholars need to identify and test novel solutions, and work with policymakers to evaluate and deploy these solutions. Hence, we provide a roadmap for future research to identify where scholars should invest their energy in order to have the greatest overall impact.

    Date Posted

    Jan 22, 2021

  • Journal Article

    You Won’t Believe Our Results! But They Might: Heterogeneity in Beliefs About the Accuracy of Online Media

    Journal of Experimental Political Science, 2021

    “Clickbait” media has long been espoused as an unfortunate consequence of the rise of digital journalism. But little is known about why readers choose to read clickbait stories. Is it merely curiosity, or might voters think such stories are more likely to provide useful information? We conduct a survey experiment in Italy, where a major political party enthusiastically embraced the esthetics of new media and encouraged their supporters to distrust legacy outlets in favor of online news. We offer respondents a monetary incentive for correct answers to manipulate the relative salience of the motivation for accurate information. This incentive increases differences in the preference for clickbait; older and less educated subjects become even more likely to opt to read a story with a clickbait headline when the incentive to produce a factually correct answer is higher. Our model suggests that a politically relevant subset of the population prefers Clickbait Media because they trust it more.

    Date Posted

    Jan 20, 2021

  • Journal Article

    Trumping Hate on Twitter? Online Hate Speech in the 2016 U.S. Election Campaign and its Aftermath

    Quarterly Journal of Political Science, 2021

    To what extent did online hate speech and white nationalist rhetoric on Twitter increase over the course of Donald Trump's 2016 presidential election campaign and its immediate aftermath? The prevailing narrative suggests that Trump's political rise — and his unexpected victory — lent legitimacy to and popularized bigoted rhetoric that was once relegated to the dark corners of the Internet. However, our analysis of over 750 million tweets related to the election, in addition to almost 400 million tweets from a random sample of American Twitter users, provides systematic evidence that hate speech did not increase on Twitter over this period. Using both machine-learning-augmented dictionary-based methods and a novel classification approach leveraging data from Reddit communities associated with the alt-right movement, we observe no persistent increase in hate speech or white nationalist language either over the course of the campaign or in the six months following Trump's election. While key campaign events and policy announcements produced brief spikes in hateful language, these bursts quickly dissipated. Overall we find no empirical support for the proposition that Trump's divisive campaign or election increased hate speech on Twitter.

    Date Posted

    Jan 11, 2021

  • Journal Article

    Political Knowledge and Misinformation in the Era of Social Media: Evidence From the 2015 UK Election

    British Journal of Political Science, 2022

    Does social media educate voters, or mislead them? This study measures changes in political knowledge among a panel of voters surveyed during the 2015 UK general election campaign while monitoring the political information to which they were exposed on the Twitter social media platform. The study's panel design permits identification of the effect of information exposure on changes in political knowledge. Twitter use led to higher levels of knowledge about politics and public affairs, as information from news media improved knowledge of politically relevant facts, and messages sent by political parties increased knowledge of party platforms. But in a troubling demonstration of campaigns' ability to manipulate knowledge, messages from the parties also shifted voters' assessments of the economy and immigration in directions favorable to the parties' platforms, leaving some voters with beliefs further from the truth at the end of the campaign than they were at its beginning.

  • Journal Article

    Political Sectarianism in America

    • Eli J. Finkel, 
    • Christopher A. Bail, 
    • Mina Cikara, 
    • Peter H. Ditto, 
    • Shanto Iyengar, 
    • Samara Klar, 
    • Lilliana Mason, 
    • Mary C. McGrath, 
    • Brendan Nyhan, 
    • David G. Rand, 
    • Linda J. Skitka, 
    • Joshua A. Tucker, 
    • Jay J. Van Bavel, 
    • Cynthia S. Wang, 
    • James N. Druckman

    Science, 2020

    Political polarization, a concern in many countries, is especially acrimonious in the United States. For decades, scholars have studied polarization as an ideological matter — how strongly Democrats and Republicans diverge vis-à-vis political ideals and policy goals. Such competition among groups in the marketplace of ideas is a hallmark of a healthy democracy. But more recently, researchers have identified a second type of polarization, one focusing less on triumphs of ideas than on dominating the abhorrent supporters of the opposing party. This literature has produced a proliferation of insights and constructs but few interdisciplinary efforts to integrate them. We offer such an integration, pinpointing the superordinate construct of political sectarianism and identifying its three core ingredients: othering, aversion, and moralization. We then consider the causes of political sectarianism and its consequences for U.S. society — especially the threat it poses to democracy. Finally, we propose interventions for minimizing its most corrosive aspects.

    Date Posted

    Oct 30, 2020

  • Journal Article

    Content-Based Features Predict Social Media Influence Operations

    Science Advances, 2020

    We study how easy it is to distinguish influence operations from organic social media activity by assessing the performance of a platform-agnostic machine learning approach. Our method uses public activity to detect content that is part of coordinated influence operations based on human-interpretable features derived solely from content. We test this method on publicly available Twitter data on Chinese, Russian, and Venezuelan troll activity targeting the United States, as well as the Reddit dataset of Russian influence efforts. To assess how well content-based features distinguish these influence operations from random samples of general and political American users, we train and test classifiers on a monthly basis for each campaign across five prediction tasks. Content-based features perform well across period, country, platform, and prediction task. Industrialized production of influence campaign content leaves a distinctive signal in user-generated content that allows tracking of campaigns from month to month and across different accounts.

    Date Posted

    Jul 22, 2020
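
The setup this abstract describes — training and testing classifiers month by month on human-interpretable content features — can be sketched minimally. The features (hashtag count, link count, post length) and the nearest-class-mean "model" below are simplified stand-ins for the paper's feature set and machine learning approach, and the posts are invented examples.

```python
# Sketch of month-by-month classification of coordinated vs. organic
# accounts from content features alone. Features and classifier are
# simplified stand-ins; data is hypothetical.
from statistics import mean

def features(post):
    words = post.split()
    return [
        sum(w.startswith("#") for w in words),  # hashtag count
        sum("http" in w for w in words),        # link count
        len(words),                             # post length
    ]

def train(posts_by_label):
    """A minimal 'model': the mean feature vector of each class."""
    return {label: [mean(col) for col in zip(*map(features, posts))]
            for label, posts in posts_by_label.items()}

def predict(model, post):
    f = features(post)
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(f, c))
    return min(model, key=lambda label: dist(model[label]))

# Hypothetical data for one month: train here, test on the next month.
month_t = {"troll": ["#MAGA #jobs http://x.co vote now"],
           "organic": ["walked the dog this morning"]}
model = train(month_t)
print(predict(model, "#election http://y.co share this"))
```

The paper's point is that industrialized content production leaves a stable enough signal that a model trained on one month transfers to the next; the loop over months and campaigns is omitted here.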

  • Journal Article

    Cross-Platform State Propaganda: Russian Trolls on Twitter and YouTube During the 2016 U.S. Presidential Election

    The International Journal of Press/Politics, 2020

    This paper investigates online propaganda strategies of the Internet Research Agency (IRA)—Russian “trolls”—during the 2016 U.S. presidential election. We assess claims that the IRA sought either to (1) support Donald Trump or (2) sow discord among the U.S. public by analyzing hyperlinks contained in 108,781 IRA tweets. Our results show that although IRA accounts promoted links to both sides of the ideological spectrum, “conservative” trolls were more active than “liberal” ones. The IRA also shared content across social media platforms, particularly YouTube—the second-most linked destination among IRA tweets. Although overall news content shared by trolls leaned moderate to conservative, we find troll accounts on both sides of the ideological spectrum, and these accounts maintain their political alignment. Links to YouTube videos were decidedly conservative, however. While mixed, this evidence is consistent with the IRA’s supporting the Republican campaign, but the IRA’s strategy was multifaceted, with an ideological division of labor among accounts. We contextualize these results as consistent with a pre-propaganda strategy. This work demonstrates the need to view political communication in the context of the broader media ecology, as governments exploit the interconnected information ecosystem to pursue covert propaganda strategies.

    Date Posted

    Jul 01, 2020

  • Journal Article

    Automated Text Classification of News Articles: A Practical Guide

    Political Analysis, 2021

    Automated text analysis methods have made possible the classification of large corpora of text by measures such as topic and tone. Here, we provide a guide to help researchers navigate the consequential decisions they need to make before any measure can be produced from the text. We consider, both theoretically and empirically, the effects of such choices using as a running example efforts to measure the tone of New York Times coverage of the economy. We show that two reasonable approaches to corpus selection yield radically different corpora and we advocate for the use of keyword searches rather than predefined subject categories provided by news archives. We demonstrate the benefits of coding using article segments instead of sentences as units of analysis. We show that, given a fixed number of codings, it is better to increase the number of unique documents coded rather than the number of coders for each document. Finally, we find that supervised machine learning algorithms outperform dictionaries on a number of criteria. Overall, we intend this guide to serve as a reminder to analysts that thoughtfulness and human validation are key to text-as-data methods, particularly in an age when it is all too easy to computationally classify texts without attending to the methodological choices therein.

    Date Posted

    Jun 09, 2020

  • Journal Article

    The (Null) Effects of Clickbait Headlines on Polarization, Trust, and Learning

    Public Opinion Quarterly, 2020

    View Article View abstract

    “Clickbait” headlines designed to entice people to click are frequently used by both legitimate and less-than-legitimate news sources. Contemporary clickbait headlines tend to use emotional partisan appeals, raising concerns about their impact on consumers of online news. This article reports the results of a pair of experiments with different subject pools: one conducted using Facebook ads that explicitly target people with a high preference for clickbait, the other using a sample recruited from Amazon’s Mechanical Turk. We estimate subjects’ individual-level preference for clickbait, and randomly assign sets of subjects to read either clickbait or traditional headlines. Findings show that older people and non-Democrats have a higher “preference for clickbait,” but reading clickbait headlines does not affect affective polarization, information retention, or trust in media.

    Date Posted

    Apr 30, 2020

  • Journal Article

    Using Social and Behavioral Science to Support COVID-19 Pandemic Response

    • Jay J. Van Bavel
    • Katherine Baicker
    • Paulo S. Boggio
    • Valerio Capraro
    • Aleksandra Cichocka
    • Mina Cikara
    • Molly J. Crockett
    • Alia J. Crum
    • Karen M. Douglas
    • James N. Druckman
    • John Drury
    • Oeindrila Dube
    • Naomi Ellemers
    • Eli J. Finkel
    • James H. Fowler
    • Michele Gelfand
    • Shihui Han
    • S. Alexander Haslam
    • Jolanda Jetten
    • Shinobu Kitayama
    • Dean Mobbs
    • Lucy E. Napper
    • Dominic J. Packer
    • Gordon Pennycook
    • Ellen Peters
    • Richard E. Petty
    • David G. Rand
    • Stephen D. Reicher
    • Simone Schnall
    • Azim Shariff
    • Linda J. Skitka
    • Sandra Susan Smith
    • Cass R. Sunstein
    • Nassim Tabri
    • Joshua A. Tucker
    • Sander van der Linden
    • Paul van Lange
    • Kim A. Weeden
    • Michael J. A. Wohl
    • Jamil Zaki
    • Sean R. Zion
    • Robb Willer

    Nature Human Behaviour, 2020

    View Article View abstract

    The COVID-19 pandemic represents a massive global health crisis. Because the crisis requires large-scale behaviour change and places significant psychological burdens on individuals, insights from the social and behavioural sciences can be used to help align human behaviour with the recommendations of epidemiologists and public health experts. Here we discuss evidence from a selection of research topics relevant to pandemics, including work on navigating threats, social and cultural influences on behaviour, science communication, moral decision-making, leadership, and stress and coping. In each section, we note the nature and quality of prior research, including uncertainty and unsettled issues. We identify several insights for effective response to the COVID-19 pandemic and highlight important gaps researchers should move quickly to fill in the coming weeks and months.

    Date Posted

    Apr 30, 2020

  • Journal Article

    Political Psycholinguistics: A Comprehensive Analysis of the Language Habits of Liberal and Conservative Social Media Users

    Journal of Personality and Social Psychology, 2020

    View Article View abstract

    For nearly a century, social scientists have sought to understand left–right ideological differences in values, motives, and thinking styles. Much progress has been made, but — as in other areas of research — this work has been criticized for relying on small and statistically unrepresentative samples and the use of reactive, self-report measures that lack ecological validity. In an effort to overcome these limitations, we employed automated text analytic methods to investigate the spontaneous, naturally occurring use of language among nearly 25,000 Twitter users. We derived 27 hypotheses from the literature on political psychology and tested them using 32 individual dictionaries. In 23 cases, we observed significant differences in the linguistic styles of liberals and conservatives. For instance, liberals used more language that conveyed benevolence, whereas conservatives used more language pertaining to threat, power, tradition, resistance to change, certainty, security, anger, anxiety, and negative emotion in general. In 17 cases, there were also significant effects of ideological extremity. For instance, moderates used more benevolent language, whereas extremists used more language pertaining to inhibition, tentativeness, affiliation, resistance to change, certainty, security, anger, anxiety, negative affect, swear words, and death-related language. These research methods, which are easily adaptable, open up new and unprecedented opportunities for conducting unobtrusive research in psycholinguistics and political psychology with large and diverse samples.
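    The dictionary-based comparison described here can be sketched as a per-group hit-rate computation. The two word lists and the sample texts below are invented stand-ins for illustration, not the study's actual 32 dictionaries or Twitter data.

```python
from collections import Counter

# Illustrative stand-ins for two dictionaries (not the study's word lists).
DICTIONARIES = {
    "benevolence": {"kind", "care", "help", "support"},
    "threat": {"danger", "attack", "risk", "enemy"},
}

def dictionary_rates(texts, dictionaries):
    """Per-dictionary hit rate: matching tokens / total tokens across all texts."""
    tokens = [w.strip(".,!?").lower() for t in texts for w in t.split()]
    if not tokens:
        return {}
    hits = Counter()
    for word in tokens:
        for name, words in dictionaries.items():
            if word in words:
                hits[name] += 1
    return {name: hits[name] / len(tokens) for name in dictionaries}

# Two invented user groups; in the study, rates were compared across
# thousands of users grouped by ideology and extremity.
group_a = ["We should help and support each other."]
group_b = ["The enemy is a danger and a risk to us."]

rates_a = dictionary_rates(group_a, DICTIONARIES)  # benevolence hits dominate
rates_b = dictionary_rates(group_b, DICTIONARIES)  # threat hits dominate
```

    Comparing such rates between groups is the unobtrusive, behavior-based measurement strategy the abstract contrasts with reactive self-report scales.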

    Date Posted

    Jan 09, 2020