Academic Research

CSMaP faculty, postdoctoral fellows, and students publish rigorous, peer-reviewed research in top academic journals and post working papers sharing ongoing work.

  • Journal Article

    News Sharing on Social Media: Mapping the Ideology of News Media, Politicians, and the Mass Public

    Political Analysis, 2024

    This article examines the information sharing behavior of U.S. politicians and the mass public by mapping the ideological sharing space of political news on social media. As data, we use the near-universal currency of online information exchange: web links. We introduce a methodological approach and software to unify the measurement of ideology across social media platforms by using sharing data to jointly estimate the ideology of news media organizations, politicians, and the mass public. Empirically, we show that (1) politicians who share ideologically polarized content share, by far, the most political news and commentary, and (2) the less competitive elections are, the more likely politicians are to share polarized information. These results demonstrate that news and commentary shared by politicians come from a highly unrepresentative set of ideologically extreme legislators and that decreases in election pressures (e.g., by gerrymandering) may encourage polarized sharing behavior.

  • Journal Article

    Measuring Receptivity to Misinformation at Scale on a Social Media Platform

    PNAS Nexus, 2024

    Measuring the impact of online misinformation is challenging. Traditional measures, such as user views or shares on social media, are incomplete because not everyone who is exposed to misinformation is equally likely to believe it. To address this issue, we developed a method that combines survey data with observational Twitter data to probabilistically estimate the number of users both exposed to and likely to believe a specific news story. As a proof of concept, we applied this method to 139 viral news articles and found that although false news reaches an audience with diverse political views, users who are both exposed and receptive to believing false news tend to have more extreme ideologies. These receptive users are also more likely to encounter misinformation earlier than those who are unlikely to believe it. This mismatch between overall user exposure and receptive user exposure underscores the limitation of relying solely on exposure or interaction data to measure the impact of misinformation, as well as the challenge of implementing effective interventions. To demonstrate how our approach can address this challenge, we then conducted data-driven simulations of common interventions used by social media platforms. We find that these interventions are only modestly effective at reducing exposure among users likely to believe misinformation, and their effectiveness quickly diminishes unless implemented soon after misinformation’s initial spread. Our paper provides a more precise estimate of misinformation’s impact by focusing on the exposure of users likely to believe it, offering insights for effective mitigation strategies on social media.
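    The estimator's core idea can be sketched in a few lines: the expected "receptive exposure" of a story is the sum, over users exposed to it, of each user's survey-predicted probability of believing it. This is a minimal illustration, with hypothetical user names and made-up probabilities, not the paper's actual pipeline.

    ```python
    # Minimal sketch of "receptive exposure": weight each exposed user by a
    # survey-predicted probability of believing the story.
    # All names and numbers here are hypothetical.

    def receptive_exposure(exposed_users, p_believe):
        """Expected number of exposed users who would believe the story:
        the sum of predicted belief probabilities over exposed users."""
        return sum(p_believe[u] for u in exposed_users)

    # Belief probabilities as a survey-trained model might predict them
    # (e.g., from ideology and demographics).
    p_believe = {"u1": 0.9, "u2": 0.1, "u3": 0.7, "u4": 0.2, "u5": 0.6}
    exposed = ["u1", "u2", "u3", "u4", "u5"]

    raw_exposure = len(exposed)                                  # 5 users saw it
    expected_receptive = receptive_exposure(exposed, p_believe)  # ~2.5 users
    ```

    The gap between `raw_exposure` and `expected_receptive` is the exposure/receptivity mismatch the abstract describes.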

  • Working Paper

    Misinformation Exposure Beyond Traditional Feeds: Evidence from a WhatsApp Deactivation Experiment in Brazil

    Working Paper, May 2024

    In most advanced democracies, concerns about the spread of misinformation are typically associated with feed-based social media platforms like Twitter and Facebook. These platforms also account for the vast majority of research on the topic. However, in most of the world, particularly in Global South countries, misinformation often reaches citizens through social media messaging apps, particularly WhatsApp. To fill the resulting gap in the literature, we conducted a multimedia deactivation experiment to test the impact of reducing exposure to potential sources of misinformation on WhatsApp during the weeks leading up to the 2022 Presidential election in Brazil. We find that this intervention significantly reduced participants’ exposure to false rumors circulating widely during the election. However, consistent with theories of mass media minimal effects, a short-term reduction in exposure to misinformation ahead of the election did not lead to significant changes in belief accuracy, political polarization, or well-being.

  • Journal Article

    The Effects of Facebook and Instagram on the 2020 Election: A Deactivation Experiment

    • Hunt Allcott,
    • Matthew Gentzkow,
    • Winter Mason,
    • Arjun Wilkins,
    • Pablo Barberá,
    • Taylor Brown,
    • Juan Carlos Cisneros,
    • Adriana Crespo-Tenorio,
    • Drew Dimmery,
    • Deen Freelon,
    • Sandra González-Bailón,
    • Andrew M. Guess,
    • Young Mie Kim,
    • David Lazer,
    • Neil Malhotra,
    • Devra Moehler,
    • Sameer Nair-Desai,
    • Houda Nait El Barj,
    • Brendan Nyhan,
    • Ana Carolina Paixao de Queiroz,
    • Jennifer Pan,
    • Jaime Settle,
    • Emily Thorson,
    • Rebekah Tromble,
    • Carlos Velasco Rivera,
    • Benjamin Wittenbrink,
    • Magdalena Wojcieszak,
    • Saam Zahedian,
    • Annie Franco,
    • Chad Kiewiet De Jong,
    • Natalie Jomini Stroud,
    • Joshua A. Tucker

    Proceedings of the National Academy of Sciences, 2024

    We study the effect of Facebook and Instagram access on political beliefs, attitudes, and behavior by randomizing a subset of 19,857 Facebook users and 15,585 Instagram users to deactivate their accounts for 6 wk before the 2020 U.S. election. We report four key findings. First, both Facebook and Instagram deactivation reduced an index of political participation (driven mainly by reduced participation online). Second, Facebook deactivation had no significant effect on an index of knowledge, but secondary analyses suggest that it reduced knowledge of general news while possibly also decreasing belief in misinformation circulating online. Third, Facebook deactivation may have reduced self-reported net votes for Trump, though this effect does not meet our preregistered significance threshold. Finally, the effects of both Facebook and Instagram deactivation on affective and issue polarization, perceived legitimacy of the election, candidate favorability, and voter turnout were all precisely estimated and close to zero.

  • Book

    Online Data and the Insurrection

    Media and January 6th, 2024

    Online data is key to understanding the leadup to the January 6 insurrection, including how and why election fraud conspiracies spread online, how conspiracy groups organized online to participate in the insurrection, and other factors of online life that led to the insurrection. However, there are significant challenges in accessing data for this research. First, platforms restrict which researchers get access to data, as well as what researchers can do with the data they access. Second, this data is ephemeral; that is, once users or the platform remove the data, researchers can no longer access it. These factors affect what research questions can ever be asked and answered.

  • Journal Article

    Replicating the Effects of Facebook Deactivation in an Ethnically Polarized Setting

    Research & Politics, 2023

    The question of how social media usage impacts societal polarization continues to generate great interest among both the research community and broader public. Nevertheless, there are still very few rigorous empirical studies of the causal impact of social media usage on polarization. To explore this question, we replicate the only published study to date that tests the effects of social media cessation on interethnic attitudes (Asimovic et al., 2021). In a study situated in Bosnia and Herzegovina, the authors found that deactivating Facebook for a week around the genocide commemoration had a negative effect on users’ attitudes toward ethnic outgroups, with the negative effect driven by users with more ethnically homogenous offline networks. Does this finding extend to other settings? In a pre-registered replication study, we implement the same research design in a different ethnically polarized setting: Cyprus. We are not able to replicate the main effect found in Asimovic et al. (2021): in Cyprus, we cannot reject the null hypothesis of no effect. We do, however, find a significant interaction between the heterogeneity of users’ offline networks and the deactivation treatment within our 2021 subsample, consistent with the pattern from Bosnia and Herzegovina. We also find support for recent findings (Allcott et al., 2020; Asimovic et al., 2021) that Facebook deactivation leads to a reduction in anxiety levels and suggestive evidence of a reduction in knowledge of current news, though the latter is again limited to our 2021 subsample.

    Date Posted

    Oct 18, 2023

  • Working Paper

    Concept-Guided Chain-of-Thought Prompting for Pairwise Comparison Scaling of Texts with Large Language Models

    Working Paper, October 2023

    Existing text scaling methods often require a large corpus, struggle with short texts, or require labeled data. We develop a text scaling method that leverages the pattern recognition capabilities of generative large language models (LLMs). Specifically, we propose concept-guided chain-of-thought (CGCoT), which uses prompts designed to summarize ideas and identify target parties in texts to generate concept-specific breakdowns, in many ways similar to the guidance given to human coders in content analysis. CGCoT effectively shifts pairwise text comparisons from a reasoning problem to a pattern recognition problem. We then pairwise compare concept-specific breakdowns using an LLM. We use the results of these pairwise comparisons to estimate a scale using the Bradley-Terry model. We use this approach to scale affective speech on Twitter. Our measures correlate more strongly with human judgments than alternative approaches like Wordfish. Besides a small set of pilot data to develop the CGCoT prompts, our measures require no additional labeled data and produce binary predictions comparable to a RoBERTa-Large model fine-tuned on thousands of human-labeled tweets. We demonstrate how combining substantive knowledge with LLMs can create state-of-the-art measures of abstract concepts.
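    The final scaling step, turning pairwise LLM judgments into a one-dimensional scale, uses the Bradley-Terry model. A minimal stdlib sketch of that step, using the standard MM algorithm (the comparison data below are invented, and the CGCoT prompting stage is not shown):

    ```python
    import collections

    def bradley_terry(n_items, comparisons, iters=500):
        """Estimate Bradley-Terry ability scores from pairwise outcomes via
        the MM algorithm. comparisons is a list of (winner, loser) index
        pairs; each item should win and lose at least once for the
        estimates to be well defined."""
        wins = collections.Counter(w for w, _ in comparisons)
        pair_counts = collections.Counter(frozenset(c) for c in comparisons)
        p = [1.0] * n_items
        for _ in range(iters):
            new_p = []
            for i in range(n_items):
                denom = 0.0
                for pair, count in pair_counts.items():
                    if i in pair:
                        (j,) = pair - {i}
                        denom += count / (p[i] + p[j])
                new_p.append(wins[i] / denom if denom else p[i])
            s = sum(new_p)
            p = [x * n_items / s for x in new_p]  # scores identified only up to scale
        return p

    # Toy example: three tweets, where tweet 2 wins most "more affectively
    # charged" comparisons, tweet 1 is intermediate, and tweet 0 the mildest.
    comparisons = [(2, 0), (2, 1), (1, 0), (2, 0), (1, 0), (2, 1), (0, 1), (1, 2)]
    scores = bradley_terry(3, comparisons)  # scores[2] > scores[1] > scores[0]
    ```

    In the paper's setting, each `(winner, loser)` pair would come from an LLM judging which of two concept-specific breakdowns expresses more of the target concept.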

    Date Posted

    Oct 18, 2023

  • Working Paper

    Reducing Prejudice and Support for Religious Nationalism Through Conversations on WhatsApp

    Working Paper, September 2023

    Can a series of online conversations with a marginalized outgroup member improve majority group members’ attitudes about that outgroup? While the intergroup contact literature provides (mixed) insights about the effects of extended interactions between groups, less is known about how relatively short and casual interactions may play out in highly polarized settings. In an experiment in India, I bring together Hindus and Muslims for five days of conversations on WhatsApp, a popular messaging platform, to investigate the extent to which chatting with a Muslim about randomly assigned discussion prompts affects Hindus’ perceptions of Muslims and approval for mainstream religious nationalist statements. I find that intergroup conversations greatly reduce prejudice against Muslims and approval for religious nationalist statements at least two to three weeks post-conversation. Intergroup conversations about non-political issues are especially effective at reducing prejudice, while conversations about politics substantially decrease support for religious nationalism. I further show how political conversations and non-political conversations affect attitudes through distinct mechanisms.

    Date Posted

    Sep 09, 2023

  • Journal Article

    Like-Minded Sources On Facebook Are Prevalent But Not Polarizing

    • Brendan Nyhan,
    • Jaime Settle,
    • Emily Thorson,
    • Magdalena Wojcieszak,
    • Pablo Barberá,
    • Annie Y. Chen,
    • Hunt Allcott,
    • Taylor Brown,
    • Adriana Crespo-Tenorio,
    • Drew Dimmery,
    • Deen Freelon,
    • Matthew Gentzkow,
    • Sandra González-Bailón,
    • Andrew M. Guess,
    • Edward Kennedy,
    • Young Mie Kim,
    • David Lazer,
    • Neil Malhotra,
    • Devra Moehler,
    • Jennifer Pan,
    • Daniel Robert Thomas,
    • Rebekah Tromble,
    • Carlos Velasco Rivera,
    • Arjun Wilkins,
    • Beixian Xiong,
    • Chad Kiewiet De Jong,
    • Annie Franco,
    • Winter Mason,
    • Natalie Jomini Stroud,
    • Joshua A. Tucker

    Nature, 2023

    Many critics raise concerns about the prevalence of ‘echo chambers’ on social media and their potential role in increasing political polarization. However, the lack of available data and the challenges of conducting large-scale field experiments have made it difficult to assess the scope of the problem. Here we present data from 2020 for the entire population of active adult Facebook users in the USA showing that content from ‘like-minded’ sources constitutes the majority of what people see on the platform, although political information and news represent only a small fraction of these exposures. To evaluate a potential response to concerns about the effects of echo chambers, we conducted a multi-wave field experiment on Facebook among 23,377 users for whom we reduced exposure to content from like-minded sources during the 2020 US presidential election by about one-third. We found that the intervention increased their exposure to content from cross-cutting sources and decreased exposure to uncivil language, but had no measurable effects on eight preregistered attitudinal measures such as affective polarization, ideological extremity, candidate evaluations and belief in false claims. These precisely estimated results suggest that although exposure to content from like-minded sources on social media is common, reducing its prevalence during the 2020 US presidential election did not correspondingly reduce polarization in beliefs or attitudes.

  • Journal Article

    How Do Social Media Feed Algorithms Affect Attitudes and Behavior in an Election Campaign?

    • Andrew M. Guess,
    • Neil Malhotra,
    • Jennifer Pan,
    • Pablo Barberá,
    • Hunt Allcott,
    • Taylor Brown,
    • Adriana Crespo-Tenorio,
    • Drew Dimmery,
    • Deen Freelon,
    • Matthew Gentzkow,
    • Sandra González-Bailón,
    • Edward Kennedy,
    • Young Mie Kim,
    • David Lazer,
    • Devra Moehler,
    • Brendan Nyhan,
    • Jaime Settle,
    • Carlos Velasco Rivera,
    • Daniel Robert Thomas,
    • Emily Thorson,
    • Rebekah Tromble,
    • Beixian Xiong,
    • Chad Kiewiet De Jong,
    • Annie Franco,
    • Winter Mason,
    • Natalie Jomini Stroud,
    • Joshua A. Tucker

    Science, 2023

    We investigated the effects of Facebook’s and Instagram’s feed algorithms during the 2020 US election. We assigned a sample of consenting users to reverse-chronologically ordered feeds instead of the default algorithms. Moving users out of algorithmic feeds substantially decreased the time they spent on the platforms and their activity. The chronological feed also affected exposure to content: The amount of political and untrustworthy content they saw increased on both platforms, the amount of content classified as uncivil or containing slur words they saw decreased on Facebook, and the amount of content from moderate friends and sources with ideologically mixed audiences they saw increased on Facebook. Despite these substantial changes in users’ on-platform experience, the chronological feed did not significantly alter levels of issue polarization, affective polarization, political knowledge, or other key attitudes during the 3-month study period.

  • Journal Article

    Reshares on Social Media Amplify Political News But Do Not Detectably Affect Beliefs or Opinions

    • Andrew M. Guess,
    • Neil Malhotra,
    • Jennifer Pan,
    • Pablo Barberá,
    • Hunt Allcott,
    • Taylor Brown,
    • Adriana Crespo-Tenorio,
    • Drew Dimmery,
    • Deen Freelon,
    • Matthew Gentzkow,
    • Sandra González-Bailón,
    • Edward Kennedy,
    • Young Mie Kim,
    • David Lazer,
    • Devra Moehler,
    • Brendan Nyhan,
    • Carlos Velasco Rivera,
    • Jaime Settle,
    • Daniel Robert Thomas,
    • Emily Thorson,
    • Rebekah Tromble,
    • Arjun Wilkins,
    • Magdalena Wojcieszak,
    • Beixian Xiong,
    • Chad Kiewiet De Jong,
    • Annie Franco,
    • Winter Mason,
    • Natalie Jomini Stroud,
    • Joshua A. Tucker

    Science, 2023

    We studied the effects of exposure to reshared content on Facebook during the 2020 US election by assigning a random set of consenting, US-based users to feeds that did not contain any reshares over a 3-month period. We find that removing reshared content substantially decreases the amount of political news, including content from untrustworthy sources, to which users are exposed; decreases overall clicks and reactions; and reduces partisan news clicks. Further, we observe that removing reshared content produces clear decreases in news knowledge within the sample, although there is some uncertainty about how this would generalize to all users. Contrary to expectations, the treatment does not significantly affect political polarization or any measure of individual-level political attitudes.

  • Journal Article

    Asymmetric Ideological Segregation In Exposure To Political News on Facebook

    • Sandra González-Bailón,
    • David Lazer,
    • Pablo Barberá,
    • Meiqing Zhang,
    • Hunt Allcott,
    • Taylor Brown,
    • Adriana Crespo-Tenorio,
    • Deen Freelon,
    • Matthew Gentzkow,
    • Andrew M. Guess,
    • Shanto Iyengar,
    • Young Mie Kim,
    • Neil Malhotra,
    • Devra Moehler,
    • Brendan Nyhan,
    • Jennifer Pan,
    • Carlos Velasco Rivera,
    • Jaime Settle,
    • Emily Thorson,
    • Rebekah Tromble,
    • Arjun Wilkins,
    • Magdalena Wojcieszak,
    • Chad Kiewiet De Jong,
    • Annie Franco,
    • Winter Mason,
    • Joshua A. Tucker,
    • Natalie Jomini Stroud

    Science, 2023

    Does Facebook enable ideological segregation in political news consumption? We analyzed exposure to news during the US 2020 election using aggregated data for 208 million US Facebook users. We compared the inventory of all political news that users could have seen in their feeds with the information that they saw (after algorithmic curation) and the information with which they engaged. We show that (i) ideological segregation is high and increases as we shift from potential exposure to actual exposure to engagement; (ii) there is an asymmetry between conservative and liberal audiences, with a substantial corner of the news ecosystem consumed exclusively by conservatives; and (iii) most misinformation, as identified by Meta’s Third-Party Fact-Checking Program, exists within this homogeneously conservative corner, which has no equivalent on the liberal side. Sources favored by conservative audiences were more prevalent in Facebook’s news ecosystem than those favored by liberals.

  • Book

    Computational Social Science for Policy and Quality of Democracy: Public Opinion, Hate Speech, Misinformation, and Foreign Influence Campaigns

    Handbook of Computational Social Science for Policy, 2023

    The intersection of social media and politics is yet another realm in which Computational Social Science has a paramount role to play. In this review, I examine the questions that computational social scientists are attempting to answer – as well as the tools and methods they are developing to do so – in three areas where the rise of social media has led to concerns about the quality of democracy in the digital information era: online hate; misinformation; and foreign influence campaigns. I begin, however, by considering a precursor of these topics – and also a potential hope for social media to be able to positively impact the quality of democracy – by exploring attempts to measure public opinion online using Computational Social Science methods. In all four areas, computational social scientists have made great strides in providing information to policy makers and the public regarding the evolution of these very complex phenomena but in all cases could do more to inform public policy with better access to the necessary data; this point is discussed in more detail in the conclusion of the review.

  • Journal Article

    What’s Not to Like? Facebook Page Likes Reveal Limited Polarization in Lifestyle Preferences

    Political Communication, 2021

    Increasing levels of political animosity in the United States invite speculation about whether polarization extends to aspects of daily life. However, empirical study about the relationship between political ideologies and lifestyle choices is limited by a lack of comprehensive data. In this research, we combine survey and Facebook Page “likes” data from more than 1,200 respondents to investigate the extent of polarization in lifestyle domains. Our results indicate that polarization is present in page categories that are somewhat related to politics – such as opinion leaders, partisan news sources, and topics related to identity and religion – but, perhaps surprisingly, it is mostly not evident in other domains, including sports, food, and music. On the individual level, we find that people who are higher in political news interest and have stronger ideological predispositions have a greater tendency to “like” ideologically homogeneous pages across categories. Our evidence, drawn from rare digital trace data covering more than 5,000 pages, adds nuance to the narrative of widespread polarization across lifestyle sectors and it suggests domains in which cross-cutting preferences are still observed in American life.

    Date Posted

    Nov 25, 2021

  • Journal Article

    Testing the Effects of Facebook Usage in an Ethnically Polarized Setting

    Proceedings of the National Academy of Sciences, 2021

    Despite the belief that social media is altering intergroup dynamics—bringing people closer or further alienating them from one another—the impact of social media on interethnic attitudes has yet to be rigorously evaluated, especially within areas with tenuous interethnic relations. We report results from a randomized controlled trial in Bosnia and Herzegovina (BiH), exploring the effects of exposure to social media during 1 wk around genocide remembrance in July 2019 on a set of interethnic attitudes of Facebook users. We find evidence that, counter to preregistered expectations, people who deactivated their Facebook profiles report lower regard for ethnic outgroups than those who remained active. Moreover, we present additional evidence suggesting that this effect is likely conditional on the level of ethnic heterogeneity of respondents’ residence. We also extend the analysis to include measures of subjective well-being and knowledge of news. Here, we find that Facebook deactivation leads to suggestive improvements in subjective well-being and a decrease in knowledge of current events, replicating results from recent research in the United States in a very different context, thus increasing our confidence in the generalizability of these effects.

    Date Posted

    Jun 22, 2021

  • Journal Article

    Tweeting Beyond Tahrir: Ideological Diversity and Political Intolerance in Egyptian Twitter Networks

    World Politics, 2021

    Do online social networks affect political tolerance in the highly polarized climate of postcoup Egypt? Taking advantage of the real-time networked structure of Twitter data, the authors find that not only is greater network diversity associated with lower levels of intolerance, but also that longer exposure to a diverse network is linked to less expression of intolerance over time. The authors find that this relationship persists in both elite and non-elite diverse networks. Exploring the mechanisms by which network diversity might affect tolerance, the authors offer suggestive evidence that social norms in online networks may shape individuals’ propensity to publicly express intolerant attitudes. The findings contribute to the political tolerance literature and enrich the ongoing debate over the relationship between online echo chambers and political attitudes and behavior by providing new insights from a repressive authoritarian context.

  • Journal Article

    Political Sectarianism in America

    • Eli J. Finkel, 
    • Christopher A. Bail, 
    • Mina Cikara, 
    • Peter H. Ditto, 
    • Shanto Iyengar, 
    • Samara Klar, 
    • Lilliana Mason, 
    • Mary C. McGrath, 
    • Brendan Nyhan, 
    • David G. Rand, 
    • Linda Skitka, 
    • Joshua A. Tucker,
    • Jay J. Van Bavel,
    • Cynthia S. Wang, 
    • James N. Druckman

    Science, 2020

    Political polarization, a concern in many countries, is especially acrimonious in the United States. For decades, scholars have studied polarization as an ideological matter — how strongly Democrats and Republicans diverge vis-à-vis political ideals and policy goals. Such competition among groups in the marketplace of ideas is a hallmark of a healthy democracy. But more recently, researchers have identified a second type of polarization, one focusing less on triumphs of ideas than on dominating the abhorrent supporters of the opposing party. This literature has produced a proliferation of insights and constructs but few interdisciplinary efforts to integrate them. We offer such an integration, pinpointing the superordinate construct of political sectarianism and identifying its three core ingredients: othering, aversion, and moralization. We then consider the causes of political sectarianism and its consequences for U.S. society — especially the threat it poses to democracy. Finally, we propose interventions for minimizing its most corrosive aspects.

    Date Posted

    Oct 30, 2020

  • Journal Article

    Political Psycholinguistics: A Comprehensive Analysis of the Language Habits of Liberal and Conservative Social Media Users

    Journal of Personality and Social Psychology, 2020

    For nearly a century social scientists have sought to understand left–right ideological differences in values, motives, and thinking styles. Much progress has been made, but — as in other areas of research — this work has been criticized for relying on small and statistically unrepresentative samples and the use of reactive, self-report measures that lack ecological validity. In an effort to overcome these limitations, we employed automated text analytic methods to investigate the spontaneous, naturally occurring use of language in nearly 25,000 Twitter users. We derived 27 hypotheses from the literature on political psychology and tested them using 32 individual dictionaries. In 23 cases, we observed significant differences in the linguistic styles of liberals and conservatives. For instance, liberals used more language that conveyed benevolence, whereas conservatives used more language pertaining to threat, power, tradition, resistance to change, certainty, security, anger, anxiety, and negative emotion in general. In 17 cases, there were also significant effects of ideological extremity. For instance, moderates used more benevolent language, whereas extremists used more language pertaining to inhibition, tentativeness, affiliation, resistance to change, certainty, security, anger, anxiety, negative affect, swear words, and death-related language. These research methods, which are easily adaptable, open up new and unprecedented opportunities for conducting unobtrusive research in psycholinguistics and political psychology with large and diverse samples.
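    The dictionary-based text analysis the study relies on reduces to counting category words per token. A toy sketch with two tiny invented dictionaries (the paper used 32 established ones):

    ```python
    # Illustrative stand-ins for the study's dictionaries, not the real ones.
    DICTIONARIES = {
        "benevolence": {"kind", "help", "care", "support"},
        "threat": {"danger", "attack", "risk", "enemy"},
    }

    def dictionary_scores(text):
        """Fraction of a text's tokens matching each dictionary."""
        tokens = text.lower().split()
        n = len(tokens) or 1  # avoid division by zero on empty input
        return {name: sum(t.strip(".,!?") in words for t in tokens) / n
                for name, words in DICTIONARIES.items()}

    scores = dictionary_scores("We must help and support those in danger.")
    # 2 of 8 tokens are benevolence words; 1 of 8 is a threat word.
    ```

    Comparing such per-category rates between liberal and conservative users is the basic test behind each of the 27 hypotheses.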

    Date Posted

    Jan 09, 2020

  • Working Paper

    Social Media, Political Polarization, and Political Disinformation: A Review of the Scientific Literature

    Hewlett Foundation, 2018

    The following report is intended to provide an overview of the current state of the literature on the relationship between social media; political polarization; and political “disinformation,” a term used to encompass a wide range of types of information about politics found online, including “fake news,” rumors, deliberately factually incorrect information, inadvertently factually incorrect information, politically slanted information, and “hyperpartisan” news. The review of the literature is provided in six separate sections, each of which can be read individually but that cumulatively are intended to provide an overview of what is known—and unknown—about the relationship between social media, political polarization, and disinformation. The report concludes by identifying key gaps in our understanding of these phenomena and the data that are needed to address them.

    Date Posted

    Mar 19, 2018

  • Journal Article

    Moral Discourse in the Twitterverse: Effects of Ideology and Political Sophistication on Language Use Among U.S. Citizens and Members of Congress

    Journal of Language and Politics, 2018

    We analyzed Twitter language to explore hypotheses derived from moral foundations theory, which suggests that liberals and conservatives prioritize different values. In Study 1, we captured 11 million tweets from nearly 25,000 U.S. residents and observed that liberals expressed fairness concerns more often than conservatives, whereas conservatives were more likely to express concerns about group loyalty, authority, and purity. Increasing political sophistication exacerbated ideological differences in authority and group loyalty. At low levels of sophistication, liberals used more harm language, but at high levels of sophistication conservatives referenced harm more often. In Study 2, we analyzed 59,000 tweets from 388 members of the U.S. Congress. Liberal legislators used more fairness- and harm-related words, whereas conservative legislators used more authority-related words. Unexpectedly, liberal legislators used more language pertaining to group loyalty and purity. Follow-up analyses suggest that liberals and conservatives in Congress use similar words to emphasize different policy priorities.

  • Journal Article

    Tweeting Identity? Ukrainian, Russian and #EuroMaidan

    Journal of Comparative Economics, 2016

    Why and when do group identities become salient? Existing scholarship has suggested that insecurity and competition over political and economic resources as well as increased perceptions of threat from the out-group tend to increase the salience of ethnic identities. Most of the work on ethnicity, however, is either experimental and deals with how people respond once identity has already been primed, is based on self-reported measures of identity, or driven by election results. In contrast, here we examine events in Ukraine from late 2013 (the beginning of the Euromaidan protests) through the end of 2014 to see if particular moments of heightened political tension led to increased identification as either “Russian” or “Ukrainian” among Ukrainian citizens. In tackling this question, we use a novel methodological approach by testing the hypothesis that those who prefer to use Ukrainian to communicate on Twitter will use Ukrainian (at the expense of Russian) following moments of heightened political awareness and those who prefer to use Russian will do the opposite. Interestingly, our primary finding is a negative result: we do not find evidence that key political events in the Ukrainian crisis led to a reversion to the language of choice at the aggregate level, which is interesting given how much ink has been spilt on the question of the extent to which Euromaidan reflected an underlying Ukrainian vs. Russian conflict. However, we unexpectedly find that both those who prefer Russian and those who prefer Ukrainian begin using Russian with a greater frequency following the annexation of Crimea, thus contributing a whole new set of puzzles – and a method for exploring these puzzles – that can serve as a basis for future research.

    Date Posted

    Dec 21, 2015

  • Journal Article

    Birds of the Same Feather Tweet Together: Bayesian Ideal Point Estimation Using Twitter Data

    Political Analysis, 2015

    Politicians and citizens increasingly engage in political conversations on social media outlets such as Twitter. In this article, I show that the structure of the social networks in which they are embedded can be a source of information about their ideological positions. Under the assumption that social networks are homophilic, I develop a Bayesian Spatial Following model that considers ideology as a latent variable, whose value can be inferred by examining which political actors each user is following. This method allows us to estimate ideology for more actors than any existing alternative, at any point in time and across many polities. I apply this method to estimate ideal points for a large sample of both elite and mass public Twitter users in the United States and five European countries. The estimated positions of legislators and political parties replicate conventional measures of ideology. The method is also able to successfully classify individuals who state their political preferences publicly and a sample of users matched with their party registration records. To illustrate the potential contribution of these estimates, I examine the extent to which online behavior during the 2012 U.S. presidential election campaign is clustered along ideological lines.
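    The model's core intuition, that a follow is likelier the closer a user's latent ideal point sits to an elite's position, can be sketched with a simplified likelihood and a grid-search estimate for one user. This is a sketch under strong simplifications: the paper's model adds popularity and activity parameters and estimates all positions jointly, and the elite positions and follow data below are invented.

    ```python
    import math

    def follow_prob(theta, phi, alpha=0.0, beta=0.0, gamma=1.0):
        """Spatial following model: logit P(follow) = alpha + beta
        - gamma * (theta - phi)**2, so probability decays with the
        squared ideological distance between user and elite."""
        z = alpha + beta - gamma * (theta - phi) ** 2
        return 1.0 / (1.0 + math.exp(-z))

    def estimate_ideal_point(follows, elite_positions, grid=None):
        """Grid-search MLE for one user's theta, elite positions fixed.
        follows maps elite -> 1 (follows) or 0 (does not)."""
        if grid is None:
            grid = [x / 100 for x in range(-300, 301)]  # -3.00 .. 3.00
        def loglik(theta):
            ll = 0.0
            for elite, y in follows.items():
                p = follow_prob(theta, elite_positions[elite])
                ll += math.log(p if y else 1 - p)
            return ll
        return max(grid, key=loglik)

    # Invented data: elites at known positions; the user follows only the
    # conservative accounts, so the estimate lands on the right.
    elites = {"lib_a": -2.0, "lib_b": -1.5, "con_a": 1.5, "con_b": 2.0}
    user_follows = {"lib_a": 0, "lib_b": 0, "con_a": 1, "con_b": 1}
    theta_hat = estimate_ideal_point(user_follows, elites)
    ```

    Scaling this idea up, with millions of users and thousands of elites estimated simultaneously, is what the paper's Bayesian machinery handles.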

  • Journal Article

    Tweeting From Left to Right: Is Online Political Communication More Than an Echo Chamber?

    Psychological Science, 2015

    We estimated ideological preferences of 3.8 million Twitter users and, using a data set of nearly 150 million tweets concerning 12 political and nonpolitical issues, explored whether online communication resembles an “echo chamber” (as a result of selective exposure and ideological segregation) or a “national conversation.” We observed that information was exchanged primarily among individuals with similar ideological preferences in the case of political issues (e.g., 2012 presidential election, 2013 government shutdown) but not many other current events (e.g., 2013 Boston Marathon bombing, 2014 Super Bowl). Discussion of the Newtown shootings in 2012 reflected a dynamic process, beginning as a national conversation before transforming into a polarized exchange. With respect to both political and nonpolitical issues, liberals were more likely than conservatives to engage in cross-ideological dissemination; this is an important asymmetry with respect to the structure of communication that is consistent with psychological theory and research bearing on ideological differences in epistemic, existential, and relational motivation. Overall, we conclude that previous work may have overestimated the degree of ideological segregation in social-media usage.

    Date Posted

    Aug 21, 2015