Academic Research

CSMaP faculty, postdoctoral fellows, and students publish rigorous, peer-reviewed research in top academic journals and post working papers sharing ongoing work.

  • Journal Article

    Understanding Latino Political Engagement and Activity on Social Media

    Political Research Quarterly, 2025

    Social media is used by millions of Americans to access news and politics. Yet there are no studies, to date, examining whether these behaviors systematically vary for those whose political incorporation process is distinct from those in the majority. We fill this void by examining how Latino online political activity compares to that of white Americans and the role of language in Latinos’ online political engagement. We hypothesize that Latino online political activity is comparable to that of whites. Moreover, given media reports suggesting that greater quantities of political misinformation are circulating on Spanish- versus English-language social media, we expect that reliance on Spanish-language social media for news predicts beliefs in inaccurate political narratives. Our survey findings, which we believe come from the largest original survey of the online political activity of Latinos and whites, reveal support for these expectations. Latino social media political activity, as measured by sharing/viewing news, talking about politics, and following politicians, is comparable to that of whites, in both self-reported and digital trace data. Latinos also turned to social media for news about COVID-19 more often than whites did. Finally, among Latinos, reliance on Spanish-language social media for news predicts belief in fraud in the 2020 U.S. Presidential election.

  • Journal Article

    The Diffusion and Reach of (Mis)Information on Facebook During the U.S. 2020 Election

    Sociological Science, 2024

    Social media creates the possibility for rapid, viral spread of content, but how many posts actually reach millions? And is misinformation special in how it propagates? We answer these questions by analyzing the virality of and exposure to information on Facebook during the U.S. 2020 presidential election. We examine the diffusion trees of the approximately 1 billion posts that were re-shared at least once by U.S.-based adults from July 1, 2020, to February 1, 2021. We differentiate misinformation from non-misinformation posts to show that (1) misinformation diffused more slowly, relying on a small number of active users that spread misinformation via long chains of peer-to-peer diffusion that reached millions; non-misinformation spread primarily through one-to-many affordances (mainly, Pages); (2) the relative importance of peer-to-peer spread for misinformation was likely due to an enforcement gap in content moderation policies designed to target mostly Pages and Groups; and (3) periods of aggressive content moderation proximate to the election coincide with dramatic drops in the spread and reach of misinformation and (to a lesser extent) political content.

  • Journal Article

    How Reliance on Spanish-Language Social Media Predicts Beliefs in False Political Narratives Amongst Latinos

    PNAS Nexus, 2024

    False political narratives are nearly inescapable on social media in the United States. They are a particularly acute problem for Latinos, and especially for those who rely on Spanish-language social media for news and information. Studies have shown that Latinos are vulnerable to misinformation because they rely more heavily on social media and messaging platforms than non-Hispanic whites. Moreover, fact-checking algorithms are not as robust in Spanish as they are in English, and social media platforms put far more effort into combating misinformation on English-language media than Spanish-language media, which compounds the likelihood of being exposed to misinformation. As a result, we expect Latinos who use Spanish-language social media to be more likely to believe in false political narratives when compared with Latinos who primarily rely on English-language social media for news. To test this expectation, we fielded the largest online survey to date of social media usage and belief in political misinformation among Latinos. Our study, fielded in the months leading up to and following the 2022 midterm elections, examines a variety of false political narratives that were circulating in both Spanish and English on social media. We find that social media reliance for news predicts one’s belief in false political stories, and that Latinos who use Spanish-language social media have a higher probability of believing in false political narratives, compared with Latinos using English-language social media.

  • Journal Article

    News Sharing on Social Media: Mapping the Ideology of News Media, Politicians, and the Mass Public

    Political Analysis, 2024

    This article examines the information sharing behavior of U.S. politicians and the mass public by mapping the ideological sharing space of political news on social media. As data, we use the near-universal currency of online information exchange: web links. We introduce a methodological approach and software to unify the measurement of ideology across social media platforms by using sharing data to jointly estimate the ideology of news media organizations, politicians, and the mass public. Empirically, we show that (1) politicians who share ideologically polarized content share, by far, the most political news and commentary and (2) the less competitive elections are, the more likely politicians are to share polarized information. These results demonstrate that news and commentary shared by politicians come from a highly unrepresentative set of ideologically extreme legislators and that decreases in election pressures (e.g., by gerrymandering) may encourage polarized sharing behavior.

  • Journal Article

    Measuring Receptivity to Misinformation at Scale on a Social Media Platform

    PNAS Nexus, 2024

    Measuring the impact of online misinformation is challenging. Traditional measures, such as user views or shares on social media, are incomplete because not everyone who is exposed to misinformation is equally likely to believe it. To address this issue, we developed a method that combines survey data with observational Twitter data to probabilistically estimate the number of users both exposed to and likely to believe a specific news story. As a proof of concept, we applied this method to 139 viral news articles and find that although false news reaches an audience with diverse political views, users who are both exposed and receptive to believing false news tend to have more extreme ideologies. These receptive users are also more likely to encounter misinformation earlier than those who are unlikely to believe it. This mismatch between overall user exposure and receptive user exposure underscores the limitation of relying solely on exposure or interaction data to measure the impact of misinformation, as well as the challenge of implementing effective interventions. To demonstrate how our approach can address this challenge, we then conducted data-driven simulations of common interventions used by social media platforms. We find that these interventions are only modestly effective at reducing exposure among users likely to believe misinformation, and their effectiveness quickly diminishes unless implemented soon after misinformation’s initial spread. Our paper provides a more precise estimate of misinformation’s impact by focusing on the exposure of users likely to believe it, offering insights for effective mitigation strategies on social media.
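The paper's core move, weighting each exposed user by a survey-derived probability of belief rather than counting raw exposures, can be sketched with invented numbers. This is an illustrative toy, not the authors' estimation pipeline; the ideology bins and probabilities below are assumptions:

```python
# Toy sketch: "receptive exposure" weights each exposed user by a
# survey-derived probability of believing the story. All numbers invented.

# Hypothetical survey estimates of P(believe | ideology bin)
p_believe = {"left": 0.05, "center": 0.15, "right": 0.40}

# Hypothetical ideology bins of users exposed to one false story
exposed = ["left"] * 500 + ["center"] * 300 + ["right"] * 200

raw_exposure = len(exposed)                              # naive count
receptive_exposure = sum(p_believe[b] for b in exposed)  # belief-weighted
print(raw_exposure, round(receptive_exposure, 1))        # 1000 vs 150.0
```

The gap between the two numbers is the mismatch the abstract describes: most exposed users here are unlikely believers, so raw exposure overstates the story's plausible impact.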

  • Working Paper

    Misinformation Exposure Beyond Traditional Feeds: Evidence from a WhatsApp Deactivation Experiment in Brazil

    Working Paper, May 2024

    In most advanced democracies, concerns about the spread of misinformation are typically associated with feed-based social media platforms like Twitter and Facebook. These platforms also account for the vast majority of research on the topic. However, in most of the world, particularly in Global South countries, misinformation often reaches citizens through social media messaging apps, particularly WhatsApp. To fill the resulting gap in the literature, we conducted a multimedia deactivation experiment to test the impact of reducing exposure to potential sources of misinformation on WhatsApp during the weeks leading up to the 2022 Presidential election in Brazil. We find that this intervention significantly reduced participants’ exposure to false rumors circulating widely during the election. However, consistent with theories of mass media minimal effects, a short-term reduction in exposure to misinformation ahead of the election did not lead to significant changes in belief accuracy, political polarization, or well-being.

  • Journal Article

    The Effects of Facebook and Instagram on the 2020 Election: A Deactivation Experiment

    • Hunt Allcott, 
    • Matthew Gentzkow, 
    • Winter Mason, 
    • Arjun Wilkins, 
    • Pablo Barberá, 
    • Taylor Brown, 
    • Juan Carlos Cisneros, 
    • Adriana Crespo-Tenorio, 
    • Drew Dimmery, 
    • Deen Freelon, 
    • Sandra González-Bailón, 
    • Andrew M. Guess, 
    • Young Mie Kim, 
    • David Lazer, 
    • Neil Malhotra, 
    • Devra Moehler, 
    • Sameer Nair-Desai, 
    • Houda Nait El Barj, 
    • Brendan Nyhan, 
    • Ana Carolina Paixao de Queiroz, 
    • Jennifer Pan, 
    • Jaime Settle, 
    • Emily Thorson, 
    • Rebekah Tromble, 
    • Carlos Velasco Rivera, 
    • Benjamin Wittenbrink, 
    • Magdalena Wojcieszak, 
    • Saam Zahedian, 
    • Annie Franco, 
    • Chad Kiewiet De Jong, 
    • Natalie Jomini Stroud, 
    • Joshua A. Tucker

    Proceedings of the National Academy of Sciences, 2024

    We study the effect of Facebook and Instagram access on political beliefs, attitudes, and behavior by randomizing a subset of 19,857 Facebook users and 15,585 Instagram users to deactivate their accounts for 6 weeks before the 2020 U.S. election. We report four key findings. First, both Facebook and Instagram deactivation reduced an index of political participation (driven mainly by reduced participation online). Second, Facebook deactivation had no significant effect on an index of knowledge, but secondary analyses suggest that it reduced knowledge of general news while possibly also decreasing belief in misinformation circulating online. Third, Facebook deactivation may have reduced self-reported net votes for Trump, though this effect does not meet our preregistered significance threshold. Finally, the effects of both Facebook and Instagram deactivation on affective and issue polarization, perceived legitimacy of the election, candidate favorability, and voter turnout were all precisely estimated and close to zero.

  • Journal Article

    Estimating the Ideology of Political YouTube Videos

    Political Analysis, 2024

    We present a method for estimating the ideology of political YouTube videos. As online media increasingly influences how people engage with politics, so does the importance of quantifying the ideology of such media for research. The subfield of estimating ideology as a latent variable has often focused on traditional actors such as legislators, while more recent work has used social media data to estimate the ideology of ordinary users, political elites, and media sources. We build on this work by developing a method to estimate the ideologies of YouTube videos, an important subset of media, based on their accompanying text metadata. First, we take Reddit posts linking to YouTube videos and use correspondence analysis to place those videos in an ideological space. We then train a text-based model with those estimated ideologies as training labels, enabling us to estimate the ideologies of videos not posted on Reddit. These predicted ideologies are then validated against human labels. Finally, we demonstrate the utility of this method by applying it to the watch histories of survey respondents with self-identified ideologies to evaluate the prevalence of echo chambers on YouTube. Our approach gives video-level scores based only on supplied text metadata, is scalable, and can be easily adjusted to account for changes in the ideological climate. This method could also be generalized to estimate the ideology of other items referenced or posted on Reddit.

    Date Posted: Feb 13, 2024
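The first stage described above (placing videos in an ideological space from co-sharing patterns) can be sketched as classic correspondence analysis on a toy community-by-video share-count matrix. The matrix and its ordering are invented for illustration; this is not the paper's data or code:

```python
import numpy as np

# Toy share-count matrix: rows are communities (ordered left to right),
# columns are videos. Invented data for illustration.
C = np.array([
    [8, 6, 1, 0],
    [5, 7, 2, 1],
    [1, 2, 7, 5],
    [0, 1, 6, 8],
], dtype=float)

P = C / C.sum()                        # correspondence matrix
r, c = P.sum(axis=1), P.sum(axis=0)    # row and column masses
# Standardized residuals, then SVD: the standard CA decomposition
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, sv, Vt = np.linalg.svd(S, full_matrices=False)
video_scores = Vt[0] / np.sqrt(c)      # first-dimension column coordinates

# Videos shared by similar communities land near each other; the first
# dimension recovers the left-right ordering (up to an arbitrary sign).
```

In the paper's second stage, scores like these serve as training labels for a text model over video metadata, which is what lets the method extend to videos never posted on Reddit.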

  • Journal Article

    Testing the Effect of Information on Discerning the Veracity of News in Real Time

    Journal of Experimental Political Science, 2023

    Despite broad adoption of digital media literacy interventions that provide online users with more information when consuming news, relatively little is known about the effect of this additional information on the discernment of news veracity in real time. Gaining a comprehensive understanding of how information impacts discernment of news veracity has been hindered by challenges of external and ecological validity. Using a series of pre-registered experiments, we measure this effect in real time. Access to the full article (rather than solely the headline/lede) and access to source information both improve an individual's ability to correctly discern the veracity of news. We also find that encouraging individuals to search online increases belief in both false/misleading and true news. Taken together, we provide a generalizable method for measuring the effect of information on news discernment, as well as crucial evidence for practitioners developing strategies for improving the public's digital media literacy.

    Date Posted: Nov 08, 2023

  • Journal Article

    Replicating the Effects of Facebook Deactivation in an Ethnically Polarized Setting

    Research & Politics, 2023

    The question of how social media usage impacts societal polarization continues to generate great interest among both the research community and broader public. Nevertheless, there are still very few rigorous empirical studies of the causal impact of social media usage on polarization. To explore this question, we replicate the only published study to date that tests the effects of social media cessation on interethnic attitudes (Asimovic et al., 2021). In a study situated in Bosnia and Herzegovina, the authors found that deactivating from Facebook for a week around genocide commemoration in Bosnia and Herzegovina had a negative effect on users’ attitudes toward ethnic outgroups, with the negative effect driven by users with more ethnically homogenous offline networks. Does this finding extend to other settings? In a pre-registered replication study, we implement the same research design in a different ethnically polarized setting: Cyprus. We are not able to replicate the main effect found in Asimovic et al. (2021): in Cyprus, we cannot reject the null hypothesis of no effect. We do, however, find a significant interaction between the heterogeneity of users’ offline networks and the deactivation treatment within our 2021 subsample, consistent with the pattern from Bosnia and Herzegovina. We also find support for recent findings (Allcott et al., 2020; Asimovic et al., 2021) that Facebook deactivation leads to a reduction in anxiety levels and suggestive evidence of a reduction in knowledge of current news, though the latter is again limited to our 2021 subsample.

    Date Posted: Oct 18, 2023

  • Journal Article

    Like-Minded Sources On Facebook Are Prevalent But Not Polarizing

    • Brendan Nyhan, 
    • Jaime Settle, 
    • Emily Thorson, 
    • Magdalena Wojcieszak, 
    • Pablo Barberá, 
    • Annie Y. Chen, 
    • Hunt Allcott, 
    • Taylor Brown, 
    • Adriana Crespo-Tenorio, 
    • Drew Dimmery, 
    • Deen Freelon, 
    • Matthew Gentzkow, 
    • Sandra González-Bailón, 
    • Andrew M. Guess, 
    • Edward Kennedy, 
    • Young Mie Kim, 
    • David Lazer, 
    • Neil Malhotra, 
    • Devra Moehler, 
    • Jennifer Pan, 
    • Daniel Robert Thomas, 
    • Rebekah Tromble, 
    • Carlos Velasco Rivera, 
    • Arjun Wilkins, 
    • Beixian Xiong, 
    • Chad Kiewiet De Jong, 
    • Annie Franco, 
    • Winter Mason, 
    • Natalie Jomini Stroud, 
    • Joshua A. Tucker

    Nature, 2023

    Many critics raise concerns about the prevalence of ‘echo chambers’ on social media and their potential role in increasing political polarization. However, the lack of available data and the challenges of conducting large-scale field experiments have made it difficult to assess the scope of the problem. Here we present data from 2020 for the entire population of active adult Facebook users in the USA showing that content from ‘like-minded’ sources constitutes the majority of what people see on the platform, although political information and news represent only a small fraction of these exposures. To evaluate a potential response to concerns about the effects of echo chambers, we conducted a multi-wave field experiment on Facebook among 23,377 users for whom we reduced exposure to content from like-minded sources during the 2020 US presidential election by about one-third. We found that the intervention increased their exposure to content from cross-cutting sources and decreased exposure to uncivil language, but had no measurable effects on eight preregistered attitudinal measures such as affective polarization, ideological extremity, candidate evaluations and belief in false claims. These precisely estimated results suggest that although exposure to content from like-minded sources on social media is common, reducing its prevalence during the 2020 US presidential election did not correspondingly reduce polarization in beliefs or attitudes.

  • Journal Article

    How Do Social Media Feed Algorithms Affect Attitudes and Behavior in an Election Campaign?

    • Andrew M. Guess, 
    • Neil Malhotra, 
    • Jennifer Pan, 
    • Pablo Barberá, 
    • Hunt Allcott, 
    • Taylor Brown, 
    • Adriana Crespo-Tenorio, 
    • Drew Dimmery, 
    • Deen Freelon, 
    • Matthew Gentzkow, 
    • Sandra González-Bailón, 
    • Edward Kennedy, 
    • Young Mie Kim, 
    • David Lazer, 
    • Devra Moehler, 
    • Brendan Nyhan, 
    • Jaime Settle, 
    • Carlos Velasco Rivera, 
    • Daniel Robert Thomas, 
    • Emily Thorson, 
    • Rebekah Tromble, 
    • Beixian Xiong, 
    • Chad Kiewiet De Jong, 
    • Annie Franco, 
    • Winter Mason, 
    • Natalie Jomini Stroud, 
    • Joshua A. Tucker

    Science, 2023

    We investigated the effects of Facebook’s and Instagram’s feed algorithms during the 2020 US election. We assigned a sample of consenting users to reverse-chronologically-ordered feeds instead of the default algorithms. Moving users out of algorithmic feeds substantially decreased the time they spent on the platforms and their activity. The chronological feed also affected exposure to content: The amount of political and untrustworthy content they saw increased on both platforms, the amount of content classified as uncivil or containing slur words they saw decreased on Facebook, and the amount of content from moderate friends and sources with ideologically mixed audiences they saw increased on Facebook. Despite these substantial changes in users’ on-platform experience, the chronological feed did not significantly alter levels of issue polarization, affective polarization, political knowledge, or other key attitudes during the 3-month study period.

  • Journal Article

    Reshares on Social Media Amplify Political News But Do Not Detectably Affect Beliefs or Opinions

    • Andrew M. Guess, 
    • Neil Malhotra, 
    • Jennifer Pan, 
    • Pablo Barberá, 
    • Hunt Allcott, 
    • Taylor Brown, 
    • Adriana Crespo-Tenorio, 
    • Drew Dimmery, 
    • Deen Freelon, 
    • Matthew Gentzkow, 
    • Sandra González-Bailón, 
    • Edward Kennedy, 
    • Young Mie Kim, 
    • David Lazer, 
    • Devra Moehler, 
    • Brendan Nyhan, 
    • Carlos Velasco Rivera, 
    • Jaime Settle, 
    • Daniel Robert Thomas, 
    • Emily Thorson, 
    • Rebekah Tromble, 
    • Arjun Wilkins, 
    • Magdalena Wojcieszak, 
    • Beixian Xiong, 
    • Chad Kiewiet De Jong, 
    • Annie Franco, 
    • Winter Mason, 
    • Natalie Jomini Stroud, 
    • Joshua A. Tucker

    Science, 2023

    We studied the effects of exposure to reshared content on Facebook during the 2020 US election by assigning a random set of consenting, US-based users to feeds that did not contain any reshares over a 3-month period. We find that removing reshared content substantially decreases the amount of political news, including content from untrustworthy sources, to which users are exposed; decreases overall clicks and reactions; and reduces partisan news clicks. Further, we observe that removing reshared content produces clear decreases in news knowledge within the sample, although there is some uncertainty about how this would generalize to all users. Contrary to expectations, the treatment does not significantly affect political polarization or any measure of individual-level political attitudes.

  • Journal Article

    Asymmetric Ideological Segregation In Exposure To Political News on Facebook

    • Sandra González-Bailón, 
    • David Lazer, 
    • Pablo Barberá, 
    • Meiqing Zhang, 
    • Hunt Allcott, 
    • Taylor Brown, 
    • Adriana Crespo-Tenorio, 
    • Deen Freelon, 
    • Matthew Gentzkow, 
    • Andrew M. Guess, 
    • Shanto Iyengar, 
    • Young Mie Kim, 
    • Neil Malhotra, 
    • Devra Moehler, 
    • Brendan Nyhan, 
    • Jennifer Pan, 
    • Carlos Velasco Rivera, 
    • Jaime Settle, 
    • Emily Thorson, 
    • Rebekah Tromble, 
    • Arjun Wilkins, 
    • Magdalena Wojcieszak, 
    • Chad Kiewiet De Jong, 
    • Annie Franco, 
    • Winter Mason, 
    • Joshua A. Tucker, 
    • Natalie Jomini Stroud
    Science, 2023

    Does Facebook enable ideological segregation in political news consumption? We analyzed exposure to news during the US 2020 election using aggregated data for 208 million US Facebook users. We compared the inventory of all political news that users could have seen in their feeds with the information that they saw (after algorithmic curation) and the information with which they engaged. We show that (i) ideological segregation is high and increases as we shift from potential exposure to actual exposure to engagement; (ii) there is an asymmetry between conservative and liberal audiences, with a substantial corner of the news ecosystem consumed exclusively by conservatives; and (iii) most misinformation, as identified by Meta’s Third-Party Fact-Checking Program, exists within this homogeneously conservative corner, which has no equivalent on the liberal side. Sources favored by conservative audiences were more prevalent on Facebook’s news ecosystem than those favored by liberals.

  • Journal Article

    Election Fraud, YouTube, and Public Perception of the Legitimacy of President Biden

    Journal of Online Trust and Safety, 2022

    Skepticism about the outcome of the 2020 presidential election in the United States led to a historic attack on the Capitol on January 6th, 2021 and represents one of the greatest challenges to America's democratic institutions in over a century. Narratives of fraud and conspiracy theories proliferated over the fall of 2020, finding fertile ground across online social networks, although little is known about the extent and drivers of this spread. In this article, we show that users who were more skeptical of the election's legitimacy were more likely to be recommended content that featured narratives about the legitimacy of the election. Our findings underscore the tension between an "effective" recommendation system that provides users with the content they want, and a dangerous mechanism by which misinformation, disinformation, and conspiracies can find their way to those most likely to believe them.

    Date Posted: Sep 01, 2022

  • Journal Article

    What We Learned About The Gateway Pundit from its Own Web Traffic Data

    Workshop Proceedings of the 16th International AAAI Conference on Web and Social Media, 2022

    To mitigate the spread of false news, researchers need to understand who visits low-quality news sites, what brings people to those sites, and what content they prefer to consume. Due to challenges in observing most direct website traffic, existing research primarily relies on alternative data sources, such as engagement signals from social media posts. However, such signals are at best only proxies for actual website visits. During an audit of far-right news websites, we discovered that The Gateway Pundit (TGP) has made its web traffic data publicly available, giving us a rare opportunity to understand what news pages people actually visit. We collected 68 million web traffic visits to the site over a one-month period and analyzed how people consume news via multiple features. Our referral analysis shows that search engines and social media platforms are the main drivers of traffic; our geo-location analysis reveals that TGP is more popular in counties where more people voted for Trump in 2020. In terms of content, topics related to the 2020 US presidential election and the 2021 US Capitol riot have the highest average number of visits. We also use these data to quantify to what degree social media engagement signals correlate with actual web visit counts. To do so, we collect Facebook and Twitter posts with URLs from TGP during the same time period. We show that all engagement signals positively correlate with web visit counts, but with varying correlation strengths. For example, total interaction on Facebook correlates better than Twitter retweet count. Our insights can also help researchers choose the right metrics when they measure the impact of news URLs on social media.

    Date Posted: Jun 01, 2022
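The engagement-versus-traffic comparison in the abstract boils down to correlating per-URL visit counts with platform engagement counts. A minimal sketch with simulated heavy-tailed data (invented numbers, not TGP's actual traffic) looks like this:

```python
import numpy as np

# Simulate per-article web visits plus two engagement proxies of differing
# noisiness (stand-ins for Facebook interactions and Twitter retweets).
rng = np.random.default_rng(7)
visits = rng.lognormal(mean=8, sigma=1.5, size=200)
fb_interactions = visits * rng.lognormal(0.0, 0.4, 200) / 50   # tighter proxy
retweets = visits * rng.lognormal(0.0, 1.0, 200) / 500         # noisier proxy

# Correlate on the log scale, since counts like these are heavy-tailed
r_fb = np.corrcoef(np.log(visits), np.log(fb_interactions))[0, 1]
r_tw = np.corrcoef(np.log(visits), np.log(retweets))[0, 1]
# By construction the less noisy signal tracks visits more closely,
# mirroring the finding that correlation strength varies by signal.
```

The design choice worth noting is the log transform: with heavy-tailed counts, a handful of viral articles would otherwise dominate a raw Pearson correlation.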

  • Working Paper

    Echo Chambers, Rabbit Holes, and Algorithmic Bias: How YouTube Recommends Content to Real Users

    Working Paper, May 2022

    To what extent does the YouTube recommendation algorithm push users into echo chambers, ideologically biased content, or rabbit holes? Despite growing popular concern, recent work suggests that the recommendation algorithm is not pushing users into these echo chambers. However, existing research relies heavily on the use of anonymous data collection that does not account for the personalized nature of the recommendation algorithm. We asked a sample of real users to install a browser extension that downloaded the list of videos they were recommended. We instructed these users to start on an assigned video and then click through 20 sets of recommendations, capturing what they were being shown in real time as they used the platform logged into their real accounts. Using a novel method to estimate the ideology of a YouTube video, we demonstrate that the YouTube recommendation algorithm does, in fact, push real users into mild ideological echo chambers where, by the end of the data collection task, liberals and conservatives received different distributions of recommendations from each other, though this difference is small. While we find evidence that this difference increases the longer the user followed the recommendation algorithm, we do not find evidence that many go down ‘rabbit holes’ that lead them to ideologically extreme content. Finally, we find that YouTube pushes all users, regardless of ideology, toward moderately conservative content and an increasingly narrow range of ideological content the longer they follow YouTube's recommendations.

    Date Posted: May 11, 2022

  • Journal Article

    News Credibility Labels Have Limited Average Effects on News Diet Quality and Fail to Reduce Misperceptions

    Science Advances, 2022

    As the primary arena for viral misinformation shifts toward transnational threats, the search continues for scalable countermeasures compatible with principles of transparency and free expression. We conducted a randomized field experiment evaluating the impact of source credibility labels embedded in users’ social feeds and search results pages. By combining representative surveys (n = 3337) and digital trace data (n = 968) from a subset of respondents, we provide a rare ecologically valid test of such an intervention on both attitudes and behavior. On average across the sample, we are unable to detect changes in real-world consumption of news from low-quality sources after 3 weeks. We can also rule out small effects on perceived accuracy of popular misinformation spread about the Black Lives Matter movement and coronavirus disease 2019. However, we present suggestive evidence of a substantively meaningful increase in news diet quality among the heaviest consumers of misinformation. We discuss the implications of our findings for scholars and practitioners.

    Date Posted: May 06, 2022

  • Journal Article

    The Times They Are Rarely A-Changin': Circadian Regularities in Social Media Use

    Journal of Quantitative Description: Digital Media, 2021

    This paper uses geolocated Twitter histories from approximately 25,000 individuals in 6 different time zones and 3 different countries to construct a proper time-zone dependent hourly baseline for social media activity studies. We establish that, across multiple regions and time periods, interaction with social media is strongly conditioned by traditional bio-rhythmic or “circadian” patterns, and that in the United States, this pattern is itself further conditioned by the ideological bent of the user. Using a time series of these histories around the 2016 U.S. Presidential election, we show that external events of great significance can disrupt traditional social media activity patterns, and that this disruption can be significant (in some cases doubling the amplitude and shifting the phase of activity up to an hour). We find that the disruption of use patterns can last an extended period of time, and in many cases, aspects of this disruption would not be detected without a circadian baseline.

    Date Posted: Apr 26, 2021
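The idea of an hourly circadian baseline can be sketched directly: estimate each hour's typical activity from ordinary days, then flag event-day hours that deviate sharply. Everything below is simulated; the circadian shape, the 30-day window, and the z > 3 threshold are assumptions for illustration, not the paper's specification:

```python
import numpy as np

rng = np.random.default_rng(1)
hours = np.arange(24)
# Invented circadian shape: quiet overnight, peaking mid-evening
shape = 50 + 40 * np.clip(np.sin((hours - 6) * np.pi / 18), 0, None)

# 30 "normal" days of hourly post counts form the baseline
normal = rng.poisson(shape, size=(30, 24))
baseline_mean = normal.mean(axis=0)
baseline_sd = normal.std(axis=0)

# An "event" day where evening activity roughly doubles (hours 19-23)
event = rng.poisson(shape * np.where(hours >= 19, 2.0, 1.0))

# Hours whose z-score against the circadian baseline exceeds 3 are flagged;
# a flat daily average would partly absorb this spike and hide it.
z = (event - baseline_mean) / baseline_sd
disrupted = hours[z > 3]
```

This is the sense in which disruption "would not be detected without a circadian baseline": the anomaly is defined relative to the hour's own expected level, not the day's overall mean.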

  • Journal Article

    Cracking Open the News Feed: Exploring What U.S. Facebook Users See and Share with Large-Scale Platform Data

    Journal of Quantitative Description: Digital Media, 2021

    In this study, we analyze for the first time newly available engagement data covering millions of web links shared on Facebook to describe how and by which categories of U.S. users different types of news are seen and shared on the platform. We focus on articles from low-credibility news publishers, credible news sources, purveyors of clickbait, and news specifically about politics, which we identify through a combination of curated lists and supervised classifiers. Our results support recent findings that more fake news is shared by older users and conservatives and that both viewing and sharing patterns suggest a preference for ideologically congenial misinformation. We also find that fake news articles related to politics are more popular among older Americans than other types, while the youngest users share relatively more articles with clickbait headlines. Across the platform, however, articles from credible news sources are shared over 5.5 times more often and viewed over 7.5 times more often than articles from low-credibility sources. These findings offer important context for researchers studying the spread and consumption of information — including misinformation — on social media.

    Date Posted

    Apr 26, 2021
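    The curated-list side of the article classification above can be sketched as a simple domain lookup. The domain lists here are placeholders, and this sketch omits the supervised classifiers the study also uses.

    ```python
    from urllib.parse import urlparse

    # Hypothetical domain lists standing in for the study's curated lists.
    LOW_CREDIBILITY = {"example-fake-news.com"}
    CREDIBLE = {"example-newspaper.com"}

    def label_link(url):
        """Label a shared web link by its registered domain."""
        domain = urlparse(url).netloc.lower().removeprefix("www.")
        if domain in LOW_CREDIBILITY:
            return "low-credibility"
        if domain in CREDIBLE:
            return "credible"
        return "unlabeled"

    label = label_link("https://www.example-fake-news.com/story")
    ```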

  • Journal Article

    YouTube Recommendations and Effects on Sharing Across Online Social Platforms

    Proceedings of the ACM on Human-Computer Interaction, 2021

    View Article View abstract

    In January 2019, YouTube announced it would exclude potentially harmful content from video recommendations but allow such videos to remain on the platform. While this step is intended to reduce YouTube's role in propagating such content, the continued availability of these videos in other online spaces makes it unclear whether this compromise actually reduces their spread. To assess this impact, we apply interrupted time series models to measure whether different types of YouTube sharing on Twitter and Reddit changed significantly in the eight months around YouTube's announcement. We evaluate video sharing across three curated sets of potentially harmful, anti-social content: a set of conspiracy videos that have been shown to experience reduced recommendations in YouTube, a larger set of videos posted by conspiracy-oriented channels, and a set of videos posted by alternative influence network (AIN) channels. As a control, we also evaluate effects on video sharing in a dataset of videos from mainstream news channels. Results show that conspiracy-labeled and AIN videos with evidence of YouTube's de-recommendation experience a significant decreasing trend in sharing on both Twitter and Reddit. For videos from conspiracy-oriented channels, however, we see no significant effect on Twitter but find a significant increase in the level of conspiracy-channel sharing on Reddit. For mainstream news sharing, we actually see an increase in trend on both platforms, suggesting that YouTube's suppression of particular content types has a targeted effect. This work finds evidence that reducing exposure to anti-social videos within YouTube, without deletion, has potential pro-social, cross-platform effects. At the same time, increases in the level of conspiracy-channel sharing raise concerns about content producers' responses to these changes, and platform transparency is needed to evaluate these effects further.

    Date Posted

    Apr 22, 2021
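    The segmented-regression idea behind the interrupted time series analysis above can be sketched by fitting separate linear trends before and after the intervention date and comparing slopes. The data and break point here are hypothetical, and the paper's actual models are more sophisticated than this two-segment fit.

    ```python
    from statistics import linear_regression

    def slope_change(series, break_idx):
        """Fit separate linear trends before and after an intervention
        point and return (pre_slope, post_slope)."""
        pre_x, pre_y = list(range(break_idx)), series[:break_idx]
        post_x, post_y = list(range(break_idx, len(series))), series[break_idx:]
        pre = linear_regression(pre_x, pre_y)
        post = linear_regression(post_x, post_y)
        return pre.slope, post.slope

    # Hypothetical weekly share counts: flat before the announcement
    # (week 4), then declining afterward.
    shares = [10, 10, 10, 10, 10, 8, 6, 4]
    pre_slope, post_slope = slope_change(shares, break_idx=4)
    ```

    A significant drop from `pre_slope` to `post_slope` is the kind of trend change the models test for, alongside level shifts at the break point.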

  • Journal Article

    You Won’t Believe Our Results! But They Might: Heterogeneity in Beliefs About the Accuracy of Online Media

    Journal of Experimental Political Science, 2021

    View Article View abstract

    “Clickbait” media has long been described as an unfortunate consequence of the rise of digital journalism. But little is known about why readers choose to read clickbait stories. Is it merely curiosity, or might voters think such stories are more likely to provide useful information? We conduct a survey experiment in Italy, where a major political party enthusiastically embraced the aesthetics of new media and encouraged their supporters to distrust legacy outlets in favor of online news. We offer respondents a monetary incentive for correct answers to manipulate the relative salience of the motivation for accurate information. This incentive increases differences in the preference for clickbait; older and less educated subjects become even more likely to opt to read a story with a clickbait headline when the incentive to produce a factually correct answer is higher. Our model suggests that a politically relevant subset of the population prefers clickbait media because they trust it more.

    Date Posted

    Jan 20, 2021

  • Journal Article

    Political Knowledge and Misinformation in the Era of Social Media: Evidence From the 2015 UK Election

    British Journal of Political Science, 2022

    View Article View abstract

    Does social media educate voters, or mislead them? This study measures changes in political knowledge among a panel of voters surveyed during the 2015 UK general election campaign while monitoring the political information to which they were exposed on the Twitter social media platform. The study's panel design permits identification of the effect of information exposure on changes in political knowledge. Twitter use led to higher levels of knowledge about politics and public affairs, as information from news media improved knowledge of politically relevant facts, and messages sent by political parties increased knowledge of party platforms. But in a troubling demonstration of campaigns' ability to manipulate knowledge, messages from the parties also shifted voters' assessments of the economy and immigration in directions favorable to the parties' platforms, leaving some voters with beliefs further from the truth at the end of the campaign than they were at its beginning.

  • Working Paper

    Opinion Change and Learning in the 2016 U.S. Presidential Election: Evidence from a Panel Survey Combined with Direct Observation of Social Media Activity

    Working Paper, September 2020

    View Article View abstract

    The role of the media in influencing people’s attitudes and opinions is difficult to demonstrate because media consumption by survey respondents is usually unobserved in datasets containing information on attitudes and vote choice. This paper leverages behavioral data combined with responses from a multi-wave panel to test whether Democrats who see more stories from liberal news sources on Twitter develop more liberal positions over time and, conversely, whether Republicans are more likely to revise their views in a conservative direction if they are exposed to more news on Twitter from conservative media sources. We find evidence that exposure to ideologically framed information and arguments changes voters’ own positions, but has a limited impact on perceptions of where the candidates stand on the issues.

    Date Posted

    Sep 24, 2020

  • Book