Academic Research

CSMaP faculty, postdoctoral fellows, and students publish rigorous, peer-reviewed research in top academic journals and post working papers sharing ongoing work.

  • Journal Article

    How Reliance on Spanish-Language Social Media Predicts Beliefs in False Political Narratives Amongst Latinos

    PNAS Nexus, 2024

    False political narratives are nearly inescapable on social media in the United States. They are a particularly acute problem for Latinos, and especially for those who rely on Spanish-language social media for news and information. Studies have shown that Latinos are vulnerable to misinformation because they rely more heavily on social media and messaging platforms than non-Hispanic whites. Moreover, fact-checking algorithms are not as robust in Spanish as they are in English, and social media platforms put far more effort into combating misinformation on English-language media than on Spanish-language media, which compounds the likelihood of being exposed to misinformation. As a result, we expect Latinos who use Spanish-language social media to be more likely to believe in false political narratives when compared with Latinos who primarily rely on English-language social media for news. To test this expectation, we fielded the largest online survey to date of Latinos' social media usage and belief in political misinformation. Our study, fielded in the months leading up to and following the 2022 midterm elections, examines a variety of false political narratives that were circulating in both Spanish and English on social media. We find that reliance on social media for news predicts belief in false political stories, and that Latinos who use Spanish-language social media have a higher probability of believing in false political narratives, compared with Latinos using English-language social media.

  • Journal Article

    News Sharing on Social Media: Mapping the Ideology of News Media, Politicians, and the Mass Public

    Political Analysis, 2024

    This article examines the information sharing behavior of U.S. politicians and the mass public by mapping the ideological sharing space of political news on social media. As data, we use the near-universal currency of online information exchange: web links. We introduce a methodological approach and software to unify the measurement of ideology across social media platforms by using sharing data to jointly estimate the ideology of news media organizations, politicians, and the mass public. Empirically, we show that (1) politicians who share ideologically polarized content share, by far, the most political news and commentary and (2) the less competitive elections are, the more likely politicians are to share polarized information. These results demonstrate that news and commentary shared by politicians come from a highly unrepresentative set of ideologically extreme legislators and that decreases in election pressures (e.g., by gerrymandering) may encourage polarized sharing behavior.
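
    The joint scaling idea described above can be illustrated with a minimal sketch, not the paper's released software: build a sharing-count matrix of accounts by news domains, run correspondence analysis, and read the first dimension as an ideology score for both the accounts (rows) and the domains (columns). The counts below are hypothetical.

    ```python
    import numpy as np

    def correspondence_scores(counts):
        """First-dimension correspondence-analysis scores for the rows and
        columns of a nonnegative count matrix (accounts x news domains)."""
        P = counts / counts.sum()
        r = P.sum(axis=1)                       # row masses
        c = P.sum(axis=0)                       # column masses
        expected = np.outer(r, c)
        S = (P - expected) / np.sqrt(expected)  # standardized residuals
        U, _, Vt = np.linalg.svd(S, full_matrices=False)
        row_scores = U[:, 0] / np.sqrt(r)       # standard coordinates, dim 1
        col_scores = Vt[0, :] / np.sqrt(c)      # sign and scale are arbitrary
        return row_scores, col_scores

    # Hypothetical link-sharing counts: 4 accounts x 3 news domains.
    counts = np.array([[40., 5., 1.],
                       [30., 10., 2.],
                       [2., 12., 35.],
                       [1., 8., 42.]])
    accounts, domains = correspondence_scores(counts)
    print(np.round(accounts, 2), np.round(domains, 2))
    ```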

  • Working Paper

    Understanding Latino Political Engagement and Activity on Social Media

    Working Paper, November 2024

    Social media is used by millions of Americans to access news and politics. Yet there are no studies, to date, examining whether these behaviors systematically vary for those whose political incorporation process is distinct from that of the majority. We fill this void by examining how Latino online political activity compares to that of white Americans and the role of language in Latinos’ online political engagement. We hypothesize that Latino online political activity is comparable to that of white Americans. Moreover, given media reports suggesting that greater quantities of political misinformation circulate on Spanish-language than on English-language social media, we expect that reliance on Spanish-language social media for news predicts beliefs in inaccurate political narratives. Our findings, drawn from what we believe to be the largest original survey of the online political activity of Latinos and whites, support these expectations. Latino social media political activity, as measured by sharing/viewing news, talking about politics, and following politicians, is comparable to that of whites in both self-reported and digital trace data. Latinos also turned to social media for news about COVID-19 more often than whites did. Finally, among Latinos, reliance on Spanish-language social media for news predicts belief in election fraud in the 2020 U.S. presidential election.

  • Journal Article

    The Trump Advantage in Policy Recall Among Voters

    American Politics Research, 2024

    Research in political science suggests campaigns have a minimal effect on voters’ attitudes and vote choice. We evaluate the effectiveness of the 2016 Trump and Clinton campaigns at informing voters by giving respondents an opportunity to name policy positions of candidates that they felt would make them better off. The relatively high rate at which respondents could name a Trump policy that would make them better off suggests that the success of his campaign can be partly attributed to its ability to communicate memorable information. Our evidence also suggests that cable television informed voters: respondents exposed to higher levels of liberal news were more likely to be able to name Clinton policies, and respondents exposed to higher levels of conservative news were more likely to name Trump policies; these effects hold even after conditioning on respondents’ ideology and exposure to mainstream media. Our results demonstrate the advantages of using novel survey questions and provide additional insights into the 2016 campaign that challenge one part of the conventional narrative about the presumed non-importance of operational ideology.

    Date Posted

    Oct 30, 2024

  • Journal Article

    Measuring Receptivity to Misinformation at Scale on a Social Media Platform

    PNAS Nexus, 2024

    Measuring the impact of online misinformation is challenging. Traditional measures, such as user views or shares on social media, are incomplete because not everyone who is exposed to misinformation is equally likely to believe it. To address this issue, we developed a method that combines survey data with observational Twitter data to probabilistically estimate the number of users both exposed to and likely to believe a specific news story. As a proof of concept, we applied this method to 139 viral news articles and found that although false news reaches an audience with diverse political views, users who are both exposed and receptive to believing false news tend to have more extreme ideologies. These receptive users are also more likely to encounter misinformation earlier than those who are unlikely to believe it. This mismatch between overall user exposure and receptive user exposure underscores the limitation of relying solely on exposure or interaction data to measure the impact of misinformation, as well as the challenge of implementing effective interventions. To demonstrate how our approach can address this challenge, we then conducted data-driven simulations of common interventions used by social media platforms. We find that these interventions are only modestly effective at reducing exposure among users likely to believe misinformation, and their effectiveness quickly diminishes unless implemented soon after misinformation’s initial spread. Our paper provides a more precise estimate of misinformation’s impact by focusing on the exposure of users likely to believe it, offering insights for effective mitigation strategies on social media.
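
    The accounting step at the heart of this kind of estimate can be sketched as follows, with entirely hypothetical numbers and a toy belief model standing in for a survey-calibrated one: each exposed user gets a predicted probability of believing the story, and summing those probabilities gives the expected number of exposed-and-receptive users, which can then be compared with the raw exposure count.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical exposed users for one article: ideology scores from trace data.
    ideology = rng.normal(0.0, 1.0, size=10_000)

    def p_believe(ideology, article_slant=1.2):
        """Toy stand-in for a survey-calibrated model: belief probability is
        higher the closer a user's ideology is to the article's slant."""
        return 1.0 / (1.0 + np.exp(np.abs(ideology - article_slant) - 1.0))

    belief_prob = p_believe(ideology)
    exposed = len(ideology)           # raw exposure count
    receptive = belief_prob.sum()     # expected exposed-and-receptive users
    print(f"exposed: {exposed}, expected receptive: {receptive:.0f} "
          f"({receptive / exposed:.1%} of exposed)")
    ```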

  • Journal Article

    A Multi-Stakeholder Approach for Leveraging Data Portability to Support Research on the Digital Information Environment

    Journal of Online Trust and Safety, 2024

    In this paper, we aim to situate data portability within the evolving discussions of how to support data access for researchers studying the digital information environment. We explore how data donations, enabled by existing data access rights and data portability requirements, provide promising opportunities for supporting research on critical trust and safety topics. Evaluating other data access mechanisms that are more central to policy debates about platform transparency, we argue that data donations are a powerful additional mechanism that offer key legal, ethical, and scientific benefits. We then assess current challenges with using data donations for research and offer recommendations for various stakeholders to better align portability mechanisms with the needs of research. Taken together, we argue that although portability is often considered within a context of competition and user agency, regulators, industry actors, and researchers should understand and leverage portability’s potential impact to empower critical research on the societal impacts of digital platforms and services.

    Date Posted

    Sep 18, 2024

  • Working Paper

    Survey Professionalism: New Evidence from Web Browsing Data

    Working Paper, August 2024

    Online panels have become an important resource for research in political science, but the financial compensation involved incentivizes respondents to become “survey professionals”, which raises concerns about data quality. We provide evidence on survey professionalism using behavioral web browsing data from three U.S. samples, recruited via Lucid, YouGov, and Facebook (total n = 3,886). Survey professionalism is common but varies across samples: By our most conservative measure, we identify 1.7% of respondents on Facebook, 7.9% of respondents on YouGov, and 34.3% of respondents on Lucid as survey professionals. However, evidence that professionals lower data quality is limited: they do not systematically differ demographically or politically from non-professionals and do not respond more randomly—although they are somewhat more likely to speed, to straightline, and to take questionnaires repeatedly. While concerns are warranted, we conclude that survey professionals do not, by and large, distort inferences of research based on online panels.

    Date Posted

    Aug 30, 2024

  • Working Paper

    Reaching Across the Political Aisle: Overcoming Challenges in Using Social Media for Recruiting Politically Diverse Respondents

    Working Paper, August 2024

    A challenge for public opinion surveys is achieving representativeness of respondents across demographic groups. We test the extent to which ideological alignment with a survey’s sponsor shapes differential partisan response and users’ choice of whether to participate in a research study on Facebook. While the use of Facebook advertisements for recruitment has increased in recent years and offers potential benefits, it can yield difficulties in recruiting politically representative samples. We recruit respondents for a short survey through two otherwise identical advertisements associated with either New York University (from a liberal state) or the University of Mississippi (from a conservative state). Contrary to our expectations, we do not find an asymmetry in completion rates between self-reported Democrats and Republicans based on the survey sponsor. Nor do we find statistically significant differences in the attitudes of respondents across the two survey sponsors when we control for observables.

    Date Posted

    Aug 13, 2024

  • Journal Article

    Digital Town Square? Nextdoor's Offline Contexts and Online Discourse

    Journal of Quantitative Description: Digital Media, 2024

    There is scant quantitative research describing Nextdoor, the world's largest and most important hyperlocal social media network. Due to its localized structure, Nextdoor data are notoriously difficult to collect and work with. We build multiple datasets that allow us to generate descriptive analyses of the platform's offline contexts and online content. We first create a comprehensive dataset of all Nextdoor neighborhoods joined with U.S. Census data, which we analyze at the community (block group) level. Our findings suggest that Nextdoor is primarily used in communities where the populations are whiter, more educated, more likely to own a home, and have higher average incomes, potentially impacting the platform's ability to create new opportunities for social capital formation and citizen engagement. At the same time, Nextdoor neighborhoods are more likely to have active government agency accounts, and law enforcement agency accounts in particular, where offline communities are more urban, have larger nonwhite populations, greater income inequality, and higher average home values. We then build a convenience sample of 30 Nextdoor neighborhoods, for which we collect daily posts and comments appearing in the feed (115,716 posts and 163,903 comments), as well as associated metadata. Among the accounts for which we collected posts and comments, posts seeking or offering services were the most frequent, while those reporting potentially suspicious people or activities received the highest average number of comments. Taken together, our study describes the ecosystem of and discussion on Nextdoor and introduces data for quantitatively studying the platform.

    Date Posted

    May 29, 2024

  • Working Paper

    Misinformation Exposure Beyond Traditional Feeds: Evidence from a WhatsApp Deactivation Experiment in Brazil

    Working Paper, May 2024

    In most advanced democracies, concerns about the spread of misinformation are typically associated with feed-based social media platforms like Twitter and Facebook. These platforms also account for the vast majority of research on the topic. However, in most of the world, particularly in Global South countries, misinformation often reaches citizens through social media messaging apps, particularly WhatsApp. To fill the resulting gap in the literature, we conducted a multimedia deactivation experiment to test the impact of reducing exposure to potential sources of misinformation on WhatsApp during the weeks leading up to the 2022 Presidential election in Brazil. We find that this intervention significantly reduced participants’ exposure to false rumors circulating widely during the election. However, consistent with theories of mass media minimal effects, a short-term reduction in exposure to misinformation ahead of the election did not lead to significant changes in belief accuracy, political polarization, or well-being.

  • Journal Article

    The Effects of Facebook and Instagram on the 2020 Election: A Deactivation Experiment

    • Hunt Allcott, 
    • Matthew Gentzkow, 
    • Winter Mason, 
    • Arjun Wilkins, 
    • Pablo Barberá
    • Taylor Brown, 
    • Juan Carlos Cisneros, 
    • Adriana Crespo-Tenorio, 
    • Drew Dimmery, 
    • Deen Freelon, 
    • Sandra González-Bailón
    • Andrew M. Guess
    • Young Mie Kim, 
    • David Lazer, 
    • Neil Malhotra, 
    • Devra Moehler, 
    • Sameer Nair-Desai, 
    • Houda Nait El Barj, 
    • Brendan Nyhan, 
    • Ana Carolina Paixao de Queiroz, 
    • Jennifer Pan, 
    • Jaime Settle, 
    • Emily Thorson, 
    • Rebekah Tromble, 
    • Carlos Velasco Rivera, 
    • Benjamin Wittenbrink, 
    • Magdalena Wojcieszak
    • Saam Zahedian, 
    • Annie Franco, 
    • Chad Kiewiet De Jong, 
    • Natalie Jomini Stroud, 
    • Joshua A. Tucker

    Proceedings of the National Academy of Sciences, 2024

    We study the effect of Facebook and Instagram access on political beliefs, attitudes, and behavior by randomizing a subset of 19,857 Facebook users and 15,585 Instagram users to deactivate their accounts for six weeks before the 2020 U.S. election. We report four key findings. First, both Facebook and Instagram deactivation reduced an index of political participation (driven mainly by reduced participation online). Second, Facebook deactivation had no significant effect on an index of knowledge, but secondary analyses suggest that it reduced knowledge of general news while possibly also decreasing belief in misinformation circulating online. Third, Facebook deactivation may have reduced self-reported net votes for Trump, though this effect does not meet our preregistered significance threshold. Finally, the effects of both Facebook and Instagram deactivation on affective and issue polarization, perceived legitimacy of the election, candidate favorability, and voter turnout were all precisely estimated and close to zero.

  • Book

    Online Data and the Insurrection

    Media and January 6th, 2024

    Online data is key to understanding the lead-up to the January 6 insurrection, including how and why election fraud conspiracies spread online, how conspiracy groups organized online to participate in the insurrection, and other factors of online life that led to the insurrection. However, there are significant challenges in accessing data for this research. First, platforms restrict which researchers get access to data, as well as what researchers can do with the data they access. Second, this data is ephemeral; that is, once users or the platform remove the data, researchers can no longer access it. These factors affect what research questions can ever be asked and answered.

  • Journal Article

    Estimating the Ideology of Political YouTube Videos

    Political Analysis, 2024

    We present a method for estimating the ideology of political YouTube videos. As online media increasingly influence how people engage with politics, quantifying the ideology of such media becomes increasingly important for research. The subfield of estimating ideology as a latent variable has often focused on traditional actors such as legislators, while more recent work has used social media data to estimate the ideology of ordinary users, political elites, and media sources. We build on this work by developing a method to estimate the ideologies of YouTube videos, an important subset of media, based on their accompanying text metadata. First, we take Reddit posts linking to YouTube videos and use correspondence analysis to place those videos in an ideological space. We then train a text-based model with those estimated ideologies as training labels, enabling us to estimate the ideologies of videos not posted on Reddit. These predicted ideologies are then validated against human labels. Finally, we demonstrate the utility of this method by applying it to the watch histories of survey respondents with self-identified ideologies to evaluate the prevalence of echo chambers on YouTube. Our approach gives video-level scores based only on supplied text metadata, is scalable, and can be easily adjusted to account for changes in the ideological climate. This method could also be generalized to estimate the ideology of other items referenced or posted on Reddit.

    Date Posted

    Feb 13, 2024
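
    The two-stage pipeline described in the abstract above can be sketched in a few lines; the feature representation and model here (TF-IDF plus ridge regression) are illustrative stand-ins rather than the authors' exact specification, and the metadata strings and ideology labels are hypothetical.

    ```python
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import Ridge

    # Stage 1 output (assumed): ideology scores for videos shared on Reddit,
    # estimated via correspondence analysis of subreddit-video co-sharing.
    train_metadata = [
        "progressive policy debate medicare for all town hall",
        "climate action rally speech green new deal",
        "second amendment rights rally border security speech",
        "tax cuts small government conservative convention",
    ]
    train_ideology = np.array([-0.9, -0.7, 0.8, 1.0])  # hypothetical labels

    # Stage 2: train a text model so videos never shared on Reddit can be
    # scored from their accompanying metadata alone.
    vectorizer = TfidfVectorizer(ngram_range=(1, 2))
    X = vectorizer.fit_transform(train_metadata)
    model = Ridge(alpha=1.0).fit(X, train_ideology)

    # Score an unseen video from its title/description text.
    new_video = ["town hall on medicare and climate policy"]
    print(model.predict(vectorizer.transform(new_video)))
    ```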

  • Journal Article

    Online Searches to Evaluate Misinformation Can Increase Its Perceived Veracity

    Nature, 2024

    Considerable scholarly attention has been paid to understanding belief in online misinformation, with a particular focus on social networks. However, the dominant role of search engines in the information environment remains underexplored, even though the use of online search to evaluate the veracity of information is a central component of media literacy interventions. Although conventional wisdom suggests that searching online when evaluating misinformation would reduce belief in it, there is little empirical evidence to evaluate this claim. Here, across five experiments, we present consistent evidence that online search to evaluate the truthfulness of false news articles actually increases the probability of believing them. To shed light on this relationship, we combine survey data with digital trace data collected using a custom browser extension. We find that the search effect is concentrated among individuals for whom search engines return lower-quality information. Our results indicate that those who search online to evaluate misinformation risk falling into data voids, or informational spaces in which there is corroborating evidence from low-quality sources. We also find consistent evidence that searching online to evaluate news increases belief in true news from low-quality sources, but inconsistent evidence that it increases belief in true news from mainstream sources. Our findings highlight the need for media literacy programmes to ground their recommendations in empirically tested strategies and for search engines to invest in solutions to the challenges identified here.

    Date Posted

    Dec 20, 2023

  • Journal Article

    A Synthesis of Evidence for Policy from Behavioural Science During COVID-19

    • Kai Ruggeri, 
    • Friederike Stock, 
    • S. Alexander Haslam, 
    • Valerio Capraro, 
    • Paulo Boggio, 
    • Naomi Ellemers, 
    • Aleksandra Cichocka, 
    • Karen M. Douglas, 
    • David G. Rand, 
    • Sander van der Linden, 
    • Mina Cikara, 
    • Eli J. Finkel, 
    • James N. Druckman, 
    • Michael J. A. Wohl, 
    • Richard E. Petty, 
    • Joshua A. Tucker
    • Azim Shariff, 
    • Michele Gelfand, 
    • Dominic Packer, 
    • Jolanda Jetten, 
    • Paul A. M. Van Lange, 
    • Gordon Pennycook, 
    • Ellen Peters, 
    • Katherine Baicker, 
    • Alia Crum, 
    • Kim A. Weeden, 
    • Lucy Napper, 
    • Nassim Tabri, 
    • Jamil Zaki, 
    • Linda Skitka, 
    • Shinobu Kitayama, 
    • Dean Mobbs, 
    • Cass R. Sunstein, 
    • Sarah Ashcroft-Jones, 
    • Anna Louise Todsen, 
    • Ali Hajian, 
    • Sanne Verra, 
    • Vanessa Buehler, 
    • Maja Friedemann, 
    • Marlene Hecht, 
    • Rayyan S. Mobarak, 
    • Ralitsa Karakasheva, 
    • Markus R. Tünte, 
    • Siu Kit Yeung, 
    • R. Shayna Rosenbaum, 
    • Žan Lep, 
    • Yuki Yamada, 
    • Sa-kiera Tiarra Jolynn Hudson, 
    • Lucía Macchia, 
    • Irina Soboleva, 
    • Eugen Dimant, 
    • Sandra J. Geiger, 
    • Hannes Jarke, 
    • Tobias Wingen, 
    • Jana Berkessel, 
    • Silvana Mareva, 
    • Lucy McGill, 
    • Francesca Papa, 
    • Bojana Većkalov, 
    • Zeina Afif, 
    • Eike K. Buabang, 
    • Marna Landman, 
    • Felice Tavera, 
    • Jack L. Andrews, 
    • Aslı Bursalıoğlu, 
    • Zorana Zupan, 
    • Lisa Wagner, 
    • Joaquin Navajas, 
    • Marek Vranka, 
    • David Kasdan, 
    • Patricia Chen, 
    • Kathleen R. Hudson, 
    • Lindsay M. Novak, 
    • Paul Teas, 
    • Nikolay R. Rachev, 
    • Matteo M. Galizzi, 
    • Katherine L. Milkman, 
    • Marija Petrović, 
    • Jay J. Van Bavel
    • Robb Willer

    Nature, 2023

    Scientific evidence regularly guides policy decisions, with behavioural science increasingly part of this process. In April 2020, an influential paper proposed 19 policy recommendations (‘claims’) detailing how evidence from behavioural science could contribute to efforts to reduce impacts and end the COVID-19 pandemic. Here we assess 747 pandemic-related research articles that empirically investigated those claims. We report the scale of evidence and whether evidence supports them to indicate applicability for policymaking. Two independent teams, involving 72 reviewers, found evidence for 18 of 19 claims, with both teams finding evidence supporting 16 (89%) of those 18 claims. The strongest evidence supported claims that anticipated culture, polarization and misinformation would be associated with policy effectiveness. Claims suggesting trusted leaders and positive social norms increased adherence to behavioural interventions also had strong empirical support, as did appealing to social consensus or bipartisan agreement. Targeted language in messaging yielded mixed effects and there were no effects for highlighting individual benefits or protecting others. No available evidence existed to assess any distinct differences in effects between using the terms ‘physical distancing’ and ‘social distancing’. Analysis of 463 papers containing data showed generally large samples; 418 involved human participants with a mean of 16,848 (median of 1,699). That statistical power underscored improved suitability of behavioural science research for informing policy decisions. Furthermore, by implementing a standardized approach to evidence selection and synthesis, we amplify broader implications for advancing scientific evidence in policy formulation and prioritization.

    Date Posted

    Dec 13, 2023

  • Journal Article

    Testing the Effect of Information on Discerning the Veracity of News in Real Time

    Journal of Experimental Political Science, 2023

    Despite broad adoption of digital media literacy interventions that provide online users with more information when consuming news, relatively little is known about the effect of this additional information on the discernment of news veracity in real time. Gaining a comprehensive understanding of how information impacts discernment of news veracity has been hindered by challenges of external and ecological validity. Using a series of pre-registered experiments, we measure this effect in real time. Access to the full article (relative to the headline/lede alone) and access to source information both improve an individual's ability to correctly discern the veracity of news. We also find that encouraging individuals to search online increases belief in both false/misleading and true news. Taken together, we provide a generalizable method for measuring the effect of information on news discernment, as well as crucial evidence for practitioners developing strategies for improving the public's digital media literacy.

    Date Posted

    Nov 08, 2023

  • Journal Article

    Replicating the Effects of Facebook Deactivation in an Ethnically Polarized Setting

    Research & Politics, 2023

    The question of how social media usage impacts societal polarization continues to generate great interest among both the research community and broader public. Nevertheless, there are still very few rigorous empirical studies of the causal impact of social media usage on polarization. To explore this question, we replicate the only published study to date that tests the effects of social media cessation on interethnic attitudes (Asimovic et al., 2021). In a study situated in Bosnia and Herzegovina, the authors found that deactivating Facebook for a week around the genocide commemoration had a negative effect on users’ attitudes toward ethnic outgroups, with the negative effect driven by users with more ethnically homogenous offline networks. Does this finding extend to other settings? In a pre-registered replication study, we implement the same research design in a different ethnically polarized setting: Cyprus. We are not able to replicate the main effect found in Asimovic et al. (2021): in Cyprus, we cannot reject the null hypothesis of no effect. We do, however, find a significant interaction between the heterogeneity of users’ offline networks and the deactivation treatment within our 2021 subsample, consistent with the pattern from Bosnia and Herzegovina. We also find support for recent findings (Allcott et al., 2020; Asimovic et al., 2021) that Facebook deactivation leads to a reduction in anxiety levels, and suggestive evidence of a reduction in knowledge of current news, though the latter is again limited to our 2021 subsample.

    Date Posted

    Oct 18, 2023

  • Working Paper

    Concept-Guided Chain-of-Thought Prompting for Pairwise Comparison Scaling of Texts with Large Language Models

    Working Paper, October 2023

    Existing text scaling methods often require a large corpus, struggle with short texts, or require labeled data. We develop a text scaling method that leverages the pattern recognition capabilities of generative large language models (LLMs). Specifically, we propose concept-guided chain-of-thought (CGCoT), which uses prompts designed to summarize ideas and identify target parties in texts to generate concept-specific breakdowns, in many ways similar to guidance for human coder content analysis. CGCoT effectively shifts pairwise text comparisons from a reasoning problem to a pattern recognition problem. We then pairwise compare concept-specific breakdowns using an LLM. We use the results of these pairwise comparisons to estimate a scale using the Bradley-Terry model. We use this approach to scale affective speech on Twitter. Our measures correlate more strongly with human judgments than do measures from alternative approaches like Wordfish. Besides a small set of pilot data to develop the CGCoT prompts, our measures require no additional labeled data and produce binary predictions comparable to a RoBERTa-Large model fine-tuned on thousands of human-labeled tweets. We demonstrate how combining substantive knowledge with LLMs can create state-of-the-art measures of abstract concepts.

    Date Posted

    Oct 18, 2023
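
    A minimal sketch of the CGCoT flow described in the abstract above, with paraphrased prompts and a canned stand-in for the LLM call so the snippet runs without an API key; the resulting 0/1 comparison outcomes across many pairs are what get scaled with the Bradley-Terry model (a scaling sketch appears after the next entry).

    ```python
    # Concept-guided chain-of-thought (CGCoT) pairwise comparison, sketched.
    # `call_llm` is a placeholder for a chat-completion API; the prompts are
    # paraphrased, not the authors' exact wording.

    def call_llm(prompt: str) -> str:
        """Stand-in for a real LLM call; returns a canned reply so this runs."""
        return "Tweet A"

    CONCEPT_PROMPTS = [
        "Who or what is the target of the tweet below? Answer in one sentence.\n\n{tweet}",
        "Summarize the emotional tone of the tweet below in one sentence.\n\n{tweet}",
    ]

    def concept_breakdown(tweet: str) -> str:
        """Run the concept-guided prompts and join the answers into a breakdown."""
        return "\n".join(call_llm(p.format(tweet=tweet)) for p in CONCEPT_PROMPTS)

    def compare(tweet_a: str, tweet_b: str) -> int:
        """Compare two breakdowns; return 1 if tweet A is judged more affective."""
        prompt = (
            "Based on the two summaries below, which tweet expresses more negative "
            "affect toward its target? Answer 'Tweet A' or 'Tweet B'.\n\n"
            f"Tweet A summary:\n{concept_breakdown(tweet_a)}\n\n"
            f"Tweet B summary:\n{concept_breakdown(tweet_b)}"
        )
        return 1 if "Tweet A" in call_llm(prompt) else 0

    print(compare("Example tweet one.", "Example tweet two."))
    ```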

  • Working Paper

    Large Language Models Can Be Used to Estimate the Latent Positions of Politicians

    Working Paper, September 2023

    Existing approaches to estimating politicians' latent positions along specific dimensions often fail when relevant data is limited. We leverage the embedded knowledge in generative large language models (LLMs) to address this challenge and measure lawmakers' positions along specific political or policy dimensions. We prompt an instruction/dialogue-tuned LLM to pairwise compare lawmakers and then scale the resulting graph using the Bradley-Terry model. We estimate novel measures of U.S. senators' positions on liberal-conservative ideology, gun control, and abortion. Our liberal-conservative scale, used to validate LLM-driven scaling, strongly correlates with existing measures and offsets interpretive gaps, suggesting LLMs synthesize relevant data from internet and digitized media rather than memorizing existing measures. Our gun control and abortion measures -- the first of their kind -- differ from the liberal-conservative scale in face-valid ways and predict interest group ratings and legislator votes better than ideology alone. Our findings suggest LLMs hold promise for solving complex social science measurement problems.
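
    The scaling step described above can be illustrated with a plain Bradley-Terry fit: given a matrix of pairwise "win" counts (here, hypothetical outcomes of LLM judgments about which of two senators is more supportive of gun control), the standard minorization-maximization updates recover latent strengths, whose logs serve as position estimates.

    ```python
    import numpy as np

    def bradley_terry(wins, iters=200):
        """Fit Bradley-Terry strengths from a pairwise win-count matrix using
        the standard MM updates. wins[i, j] = number of times i beat j."""
        n = wins.shape[0]
        comparisons = wins + wins.T            # total i-vs-j comparisons
        p = np.ones(n)
        total_wins = wins.sum(axis=1)
        for _ in range(iters):
            denom = np.zeros(n)
            for i in range(n):
                for j in range(n):
                    if i != j and comparisons[i, j] > 0:
                        denom[i] += comparisons[i, j] / (p[i] + p[j])
            p = total_wins / denom
            p /= p.sum()                       # fix the arbitrary scale
        return np.log(p)                       # log-strengths as latent positions

    # Hypothetical comparison outcomes for four senators.
    wins = np.array([[0., 8., 9., 10.],
                     [2., 0., 7., 9.],
                     [1., 3., 0., 8.],
                     [0., 1., 2., 0.]])
    print(np.round(bradley_terry(wins), 2))
    ```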

  • Working Paper

    Reducing Prejudice and Support for Religious Nationalism Through Conversations on WhatsApp

    Working Paper, September 2023

    Can a series of online conversations with a marginalized outgroup member improve majority group members’ attitudes about that outgroup? While the intergroup contact literature provides (mixed) insights about the effects of extended interactions between groups, less is known about how relatively short and casual interactions may play out in highly polarized settings. In an experiment in India, I bring together Hindus and Muslims for five days of conversations on WhatsApp, a popular messaging platform, to investigate the extent to which chatting with a Muslim about randomly assigned discussion prompts affects Hindus’ perceptions of Muslims and approval for mainstream religious nationalist statements. I find that intergroup conversations greatly reduce prejudice against Muslims and approval for religious nationalist statements at least two to three weeks post-conversation. Intergroup conversations about non-political issues are especially effective at reducing prejudice, while conversations about politics substantially decrease support for religious nationalism. I further show how political conversations and non-political conversations affect attitudes through distinct mechanisms.

    Date Posted

    Sep 09, 2023

  • Journal Article

    Like-Minded Sources On Facebook Are Prevalent But Not Polarizing

    • Brendan Nyhan, 
    • Jaime Settle, 
    • Emily Thorson, 
    • Magdalena Wojcieszak
    • Pablo Barberá
    • Annie Y. Chen, 
    • Hunt Allcott, 
    • Taylor Brown, 
    • Adriana Crespo-Tenorio, 
    • Drew Dimmery, 
    • Deen Freelon, 
    • Matthew Gentzkow, 
    • Sandra González-Bailón
    • Andrew M. Guess
    • Edward Kennedy, 
    • Young Mie Kim, 
    • David Lazer, 
    • Neil Malhotra, 
    • Devra Moehler, 
    • Jennifer Pan, 
    • Daniel Robert Thomas, 
    • Rebekah Tromble, 
    • Carlos Velasco Rivera, 
    • Arjun Wilkins, 
    • Beixian Xiong, 
    • Chad Kiewiet De Jong, 
    • Annie Franco, 
    • Winter Mason, 
    • Natalie Jomini Stroud, 
    • Joshua A. Tucker

    Nature, 2023

    Many critics raise concerns about the prevalence of ‘echo chambers’ on social media and their potential role in increasing political polarization. However, the lack of available data and the challenges of conducting large-scale field experiments have made it difficult to assess the scope of the problem. Here we present data from 2020 for the entire population of active adult Facebook users in the USA showing that content from ‘like-minded’ sources constitutes the majority of what people see on the platform, although political information and news represent only a small fraction of these exposures. To evaluate a potential response to concerns about the effects of echo chambers, we conducted a multi-wave field experiment on Facebook among 23,377 users for whom we reduced exposure to content from like-minded sources during the 2020 US presidential election by about one-third. We found that the intervention increased their exposure to content from cross-cutting sources and decreased exposure to uncivil language, but had no measurable effects on eight preregistered attitudinal measures such as affective polarization, ideological extremity, candidate evaluations and belief in false claims. These precisely estimated results suggest that although exposure to content from like-minded sources on social media is common, reducing its prevalence during the 2020 US presidential election did not correspondingly reduce polarization in beliefs or attitudes.

  • Journal Article

    How Do Social Media Feed Algorithms Affect Attitudes and Behavior in an Election Campaign?

    • Andrew M. Guess
    • Neil Malhotra, 
    • Jennifer Pan, 
    • Pablo Barberá
    • Hunt Allcott, 
    • Taylor Brown, 
    • Adriana Crespo-Tenorio, 
    • Drew Dimmery, 
    • Deen Freelon, 
    • Matthew Gentzkow, 
    • Sandra González-Bailón
    • Edward Kennedy, 
    • Young Mie Kim, 
    • David Lazer, 
    • Devra Moehler, 
    • Brendan Nyhan, 
    • Jaime Settle, 
    • Carlos Velasco Rivera, 
    • Daniel Robert Thomas, 
    • Emily Thorson, 
    • Rebekah Tromble, 
    • Beixian Xiong, 
    • Chad Kiewiet De Jong, 
    • Annie Franco, 
    • Winter Mason, 
    • Natalie Jomini Stroud, 
    • Joshua A. Tucker

    Science, 2023

    We investigated the effects of Facebook’s and Instagram’s feed algorithms during the 2020 US election. We assigned a sample of consenting users to reverse-chronologically-ordered feeds instead of the default algorithms. Moving users out of algorithmic feeds substantially decreased the time they spent on the platforms and their activity. The chronological feed also affected exposure to content: The amount of political and untrustworthy content they saw increased on both platforms, the amount of content classified as uncivil or containing slur words they saw decreased on Facebook, and the amount of content from moderate friends and sources with ideologically mixed audiences they saw increased on Facebook. Despite these substantial changes in users’ on-platform experience, the chronological feed did not significantly alter levels of issue polarization, affective polarization, political knowledge, or other key attitudes during the 3-month study period.

  • Journal Article

    Reshares on Social Media Amplify Political News But Do Not Detectably Affect Beliefs or Opinions

    • Andrew M. Guess
    • Neil Malhotra, 
    • Jennifer Pan, 
    • Pablo Barberá
    • Hunt Allcott, 
    • Taylor Brown, 
    • Adriana Crespo-Tenorio, 
    • Drew Dimmery, 
    • Deen Freelon, 
    • Matthew Gentzkow, 
    • Sandra González-Bailón
    • Edward Kennedy, 
    • Young Mie Kim, 
    • David Lazer, 
    • Devra Moehler, 
    • Brendan Nyhan, 
    • Carlos Velasco Rivera, 
    • Jaime Settle, 
    • Daniel Robert Thomas, 
    • Emily Thorson, 
    • Rebekah Tromble, 
    • Arjun Wilkins, 
    • Magdalena Wojcieszak
    • Beixian Xiong, 
    • Chad Kiewiet De Jong, 
    • Annie Franco, 
    • Winter Mason, 
    • Natalie Jomini Stroud, 
    • Joshua A. Tucker

    Science, 2023

    We studied the effects of exposure to reshared content on Facebook during the 2020 US election by assigning a random set of consenting, US-based users to feeds that did not contain any reshares over a 3-month period. We find that removing reshared content substantially decreases the amount of political news, including content from untrustworthy sources, to which users are exposed; decreases overall clicks and reactions; and reduces partisan news clicks. Further, we observe that removing reshared content produces clear decreases in news knowledge within the sample, although there is some uncertainty about how this would generalize to all users. Contrary to expectations, the treatment does not significantly affect political polarization or any measure of individual-level political attitudes.

  • Journal Article

    Asymmetric Ideological Segregation in Exposure to Political News on Facebook

    • Sandra González-Bailón
    • David Lazer, 
    • Pablo Barberá
    • Meiqing Zhang, 
    • Hunt Allcott, 
    • Taylor Brown, 
    • Adriana Crespo-Tenorio, 
    • Deen Freelon, 
    • Matthew Gentzkow, 
    • Andrew M. Guess
    • Shanto Iyengar, 
    • Young Mie Kim, 
    • Neil Malhotra, 
    • Devra Moehler, 
    • Brendan Nyhan, 
    • Jennifer Pan, 
    • Carlos Velasco Rivera, 
    • Jaime Settle, 
    • Emily Thorson, 
    • Rebekah Tromble, 
    • Arjun Wilkins, 
    • Magdalena Wojcieszak
    • Chad Kiewiet De Jong, 
    • Annie Franco, 
    • Winter Mason, 
    • Joshua A. Tucker
    • Natalie Jomini Stroud

    Science, 2023

    Does Facebook enable ideological segregation in political news consumption? We analyzed exposure to news during the US 2020 election using aggregated data for 208 million US Facebook users. We compared the inventory of all political news that users could have seen in their feeds with the information that they saw (after algorithmic curation) and the information with which they engaged. We show that (i) ideological segregation is high and increases as we shift from potential exposure to actual exposure to engagement; (ii) there is an asymmetry between conservative and liberal audiences, with a substantial corner of the news ecosystem consumed exclusively by conservatives; and (iii) most misinformation, as identified by Meta’s Third-Party Fact-Checking Program, exists within this homogeneously conservative corner, which has no equivalent on the liberal side. Sources favored by conservative audiences were more prevalent on Facebook’s news ecosystem than those favored by liberals.
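
    One way to see the pattern described above in miniature: at each stage of the funnel (potential exposure, actual exposure, engagement), compute the average conservative audience share of the content reached by conservatives minus that reached by liberals. The per-URL counts below are hypothetical, and this isolation-style gap is only an illustrative stand-in for the paper's segregation measures.

    ```python
    import numpy as np

    def segregation(cons_views, lib_views):
        """Isolation-style gap: mean conservative audience share of content seen
        by conservatives minus that seen by liberals. Zero means no segregation."""
        cons_share = cons_views / (cons_views + lib_views)  # per-URL audience lean
        seen_by_cons = np.average(cons_share, weights=cons_views)
        seen_by_libs = np.average(cons_share, weights=lib_views)
        return seen_by_cons - seen_by_libs

    # Hypothetical per-URL view counts by conservative and liberal users.
    stages = {
        "potential exposure": (np.array([90., 60., 20.]), np.array([40., 70., 80.])),
        "actual exposure":    (np.array([80., 40., 10.]), np.array([20., 60., 75.])),
        "engagement":         (np.array([70., 20.,  5.]), np.array([10., 50., 70.])),
    }
    for stage, (cons, lib) in stages.items():
        print(f"{stage}: {segregation(cons, lib):.2f}")
    ```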

  • Journal Article

    Measuring the Ideology of Audiences for Web Links and Domains Using Differentially Private Engagement Data

    Proceedings of the International AAAI Conference on Web and Social Media, 2023

    Date Posted

    Jun 02, 2023