Academic Research

CSMaP faculty, postdoctoral fellows, and students publish rigorous, peer-reviewed research in top academic journals and post working papers sharing ongoing work.

  • Journal Article

    News Sharing on Social Media: Mapping the Ideology of News Media, Politicians, and the Mass Public

    Political Analysis, 2024

    View Article View abstract

    This article examines the information sharing behavior of U.S. politicians and the mass public by mapping the ideological sharing space of political news on social media. As data, we use the near-universal currency of online information exchange: web links. We introduce a methodological approach and software to unify the measurement of ideology across social media platforms by using sharing data to jointly estimate the ideology of news media organizations, politicians, and the mass public. Empirically, we show that (1) politicians who share ideologically polarized content share, by far, the most political news and commentary and (2) the less competitive elections are, the more likely politicians are to share polarized information. These results demonstrate that news and commentary shared by politicians come from a highly unrepresentative set of ideologically extreme legislators and that decreases in election pressures (e.g., by gerrymandering) may encourage polarized sharing behavior.
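The joint estimation described above uses sharing data to place outlets, politicians, and users on a common ideological scale. As a loose illustration of the underlying idea (not the paper's actual estimator), the sketch below applies reciprocal averaging, the iterative core of correspondence analysis, to a hypothetical user-by-domain sharing matrix: each user's score is the weighted mean of the domains they share, and vice versa, until the two sets of scores stabilize.

```python
# Toy sketch: jointly scale users and news domains from sharing counts
# via reciprocal averaging. The sharing counts below are hypothetical,
# and this is an illustration, not the published method.

def reciprocal_averaging(shares, n_iter=100):
    """shares: dict mapping (user, domain) -> share count."""
    users = sorted({u for u, _ in shares})
    domains = sorted({d for _, d in shares})
    # Start domains at arbitrary distinct scores, users at zero.
    d_score = {d: float(i) for i, d in enumerate(domains)}
    u_score = {u: 0.0 for u in users}
    for _ in range(n_iter):
        # Each user's score = weighted mean of the domains they share.
        for u in users:
            num = sum(c * d_score[d] for (uu, d), c in shares.items() if uu == u)
            den = sum(c for (uu, _), c in shares.items() if uu == u)
            u_score[u] = num / den
        # Each domain's score = weighted mean of the users who share it.
        for d in domains:
            num = sum(c * u_score[u] for (u, dd), c in shares.items() if dd == d)
            den = sum(c for (_, dd), c in shares.items() if dd == d)
            d_score[d] = num / den
        # Standardize user scores so the solution does not collapse.
        mean = sum(u_score.values()) / len(users)
        sd = (sum((s - mean) ** 2 for s in u_score.values()) / len(users)) ** 0.5
        for u in users:
            u_score[u] = (u_score[u] - mean) / sd
    return u_score, d_score

# Hypothetical sharing counts: two "left" users, two "right" users.
shares = {("u1", "leftnews"): 9, ("u1", "center"): 1,
          ("u2", "leftnews"): 8, ("u2", "center"): 2,
          ("u3", "rightnews"): 9, ("u3", "center"): 1,
          ("u4", "rightnews"): 8, ("u4", "center"): 2}
u, d = reciprocal_averaging(shares)
```

After a few iterations the users who share "leftnews" and those who share "rightnews" separate onto opposite ends of the recovered scale, with "center" near zero.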

  • Working Paper

    Understanding Latino Political Engagement and Activity on Social Media

    Working Paper, November 2024

    View Article View abstract

    Social media is used by millions of Americans to access news and politics. Yet there are no studies, to date, examining whether these behaviors systematically vary for those whose political incorporation process is distinct from those in the majority. We fill this void by examining how Latino online political activity compares to that of white Americans and the role of language in Latinos’ online political engagement. We hypothesize that Latino online political activity is comparable to white Americans. Moreover, given media reports suggesting that greater quantities of political misinformation are circulating on Spanish versus English-language social media, we expect that reliance on Spanish-language social media for news predicts beliefs in inaccurate political narratives. Our survey findings, which we believe to be the largest original survey of the online political activity of Latinos and whites, reveal support for these expectations. Latino social media political activity, as measured by sharing/viewing news, talking about politics, and following politicians, is comparable to whites, both in self-reported and digital trace data. Latinos also turned to social media for news about COVID-19 more often than did whites. Finally, Latinos relying on Spanish-language social media usage for news predicts beliefs in election fraud in the 2020 U.S. Presidential election.

  • Journal Article

    Measuring Receptivity to Misinformation at Scale on a Social Media Platform

    PNAS Nexus, 2024

    View Article View abstract

    Measuring the impact of online misinformation is challenging. Traditional measures, such as user views or shares on social media, are incomplete because not everyone who is exposed to misinformation is equally likely to believe it. To address this issue, we developed a method that combines survey data with observational Twitter data to probabilistically estimate the number of users both exposed to and likely to believe a specific news story. As a proof of concept, we applied this method to 139 viral news articles and found that although false news reaches an audience with diverse political views, users who are both exposed and receptive to believing false news tend to have more extreme ideologies. These receptive users are also more likely to encounter misinformation earlier than those who are unlikely to believe it. This mismatch between overall user exposure and receptive user exposure underscores the limitation of relying solely on exposure or interaction data to measure the impact of misinformation, as well as the challenge of implementing effective interventions. To demonstrate how our approach can address this challenge, we then conducted data-driven simulations of common interventions used by social media platforms. We find that these interventions are only modestly effective at reducing exposure among users likely to believe misinformation, and their effectiveness quickly diminishes unless implemented soon after misinformation’s initial spread. Our paper provides a more precise estimate of misinformation’s impact by focusing on the exposure of users likely to believe it, offering insights for effective mitigation strategies on social media.
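The core move in the abstract above is to weight exposures by a survey-derived probability of belief rather than counting raw views. A minimal sketch of that accounting, with entirely hypothetical numbers, looks like this:

```python
# Illustrative sketch of receptivity-weighted exposure: weight each
# exposed user by a survey-based probability of believing the story,
# instead of counting raw exposures. All numbers are hypothetical.

def receptive_exposure(exposures, p_believe):
    """exposures: {ideology_bucket: n_users_exposed};
    p_believe: {ideology_bucket: P(believe | exposed)} from survey data.
    Returns (raw exposure count, expected receptive exposure count)."""
    raw = sum(exposures.values())
    receptive = sum(n * p_believe[b] for b, n in exposures.items())
    return raw, receptive

exposures = {"far_left": 500, "center": 4000, "far_right": 1500}
p_believe = {"far_left": 0.05, "center": 0.10, "far_right": 0.40}
raw, receptive = receptive_exposure(exposures, p_believe)
# raw = 6000 exposures, but only 500*0.05 + 4000*0.10 + 1500*0.40 = 1025
# expected receptive exposures, dominated by the most extreme bucket.
```

The gap between the two totals is exactly the exposure/receptivity mismatch the paper emphasizes: a story can reach a broad audience while its expected believers are concentrated in one ideological tail.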

  • Working Paper

    Concept-Guided Chain-of-Thought Prompting for Pairwise Comparison Scaling of Texts with Large Language Models

    Working Paper, October 2023

    View Article View abstract

    Existing text scaling methods often require a large corpus, struggle with short texts, or require labeled data. We develop a text scaling method that leverages the pattern recognition capabilities of generative large language models (LLMs). Specifically, we propose concept-guided chain-of-thought (CGCoT), which uses prompts designed to summarize ideas and identify target parties in texts to generate concept-specific breakdowns, in many ways similar to guidance for human coder content analysis. CGCoT effectively shifts pairwise text comparisons from a reasoning problem to a pattern recognition problem. We then pairwise compare concept-specific breakdowns using an LLM. We use the results of these pairwise comparisons to estimate a scale using the Bradley-Terry model. We use this approach to scale affective speech on Twitter. Our measures correlate more strongly with human judgments than alternative approaches like Wordfish. Besides a small set of pilot data to develop the CGCoT prompts, our measures require no additional labeled data and produce binary predictions comparable to a RoBERTa-Large model fine-tuned on thousands of human-labeled tweets. We demonstrate how combining substantive knowledge with LLMs can create state-of-the-art measures of abstract concepts.

    Date Posted

    Oct 18, 2023
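The scaling step the abstract names, fitting a Bradley-Terry model to pairwise comparisons, can be sketched with the classic minorization-maximization update. The CGCoT prompting itself is not shown here, and the pairwise judgments below are made up:

```python
# Minimal Bradley-Terry fit: given pairwise "text i beat text j" outcomes
# (e.g., LLM judgments of which tweet is more affectively charged),
# estimate a latent score per text with the standard MM iteration.
# The judgments below are hypothetical.

from collections import defaultdict

def bradley_terry(wins, n_iter=200):
    """wins: list of (winner, loser) pairs. Returns {item: strength}."""
    items = {x for pair in wins for x in pair}
    w = defaultdict(int)          # total wins per item
    n = defaultdict(int)          # comparisons per unordered pair
    for a, b in wins:
        w[a] += 1
        n[frozenset((a, b))] += 1
    p = {x: 1.0 for x in items}
    for _ in range(n_iter):
        new = {}
        for i in items:
            denom = sum(n[frozenset((i, j))] / (p[i] + p[j])
                        for j in items if j != i and n[frozenset((i, j))])
            new[i] = w[i] / denom if denom else p[i]
        total = sum(new.values())
        p = {i: v * len(items) / total for i, v in new.items()}  # normalize
    return p

# Hypothetical judgments: A beats B twice, B beats C twice, A beats C once.
scores = bradley_terry([("A", "B"), ("A", "B"), ("B", "C"), ("B", "C"), ("A", "C")])
```

The recovered strengths order the texts A > B > C, consistent with the raw win counts; in the paper's pipeline the "items" would be concept-specific breakdowns of tweets rather than letters.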

  • Book

    Computational Social Science for Policy and Quality of Democracy: Public Opinion, Hate Speech, Misinformation, and Foreign Influence Campaigns

    Handbook of Computational Social Science for Policy, 2023

    View Book View abstract

    The intersection of social media and politics is yet another realm in which Computational Social Science has a paramount role to play. In this review, I examine the questions that computational social scientists are attempting to answer – as well as the tools and methods they are developing to do so – in three areas where the rise of social media has led to concerns about the quality of democracy in the digital information era: online hate; misinformation; and foreign influence campaigns. I begin, however, by considering a precursor of these topics – and also a potential hope for social media to be able to positively impact the quality of democracy – by exploring attempts to measure public opinion online using Computational Social Science methods. In all four areas, computational social scientists have made great strides in providing information to policy makers and the public regarding the evolution of these very complex phenomena but in all cases could do more to inform public policy with better access to the necessary data; this point is discussed in more detail in the conclusion of the review.

  • Journal Article

    Exposure to the Russian Internet Research Agency Foreign Influence Campaign on Twitter in the 2016 US Election and Its Relationship to Attitudes and Voting Behavior

    Nature Communications, 2023

    View Article View abstract

    There is widespread concern that foreign actors are using social media to interfere in elections worldwide. Yet data have been unavailable to investigate links between exposure to foreign influence campaigns and political behavior. Using longitudinal survey data from US respondents linked to their Twitter feeds, we quantify the relationship between exposure to the Russian foreign influence campaign and attitudes and voting behavior in the 2016 US election. We demonstrate, first, that exposure to Russian disinformation accounts was heavily concentrated: only 1% of users accounted for 70% of exposures. Second, exposure was concentrated among users who strongly identified as Republicans. Third, exposure to the Russian influence campaign was eclipsed by content from domestic news media and politicians. Finally, we find no evidence of a meaningful relationship between exposure to the Russian foreign influence campaign and changes in attitudes, polarization, or voting behavior. The results have implications for understanding the limits of election interference campaigns on social media.

    Date Posted

    Jan 09, 2023

  • Journal Article

    Using Social Media Data to Reveal Patterns of Policy Engagement in State Legislatures

    State Politics & Policy Quarterly, 2022

    View Article View abstract

    State governments are tasked with making important policy decisions in the United States. How do state legislators use their public communications—particularly social media—to engage with policy debates? Due to previous data limitations, we lack systematic information about whether and how state legislators publicly discuss policy and how this behavior varies across contexts. Using Twitter data and state-of-the-art topic modeling techniques, we introduce a method to study state legislator policy priorities and apply the method to 15 US states in 2018. We show that we are able to accurately capture the policy issues discussed by state legislators with substantially more accuracy than existing methods. We then present initial findings that validate the method and speak to debates in the literature. The paper concludes by discussing promising avenues for future state politics research using this new approach.

    Date Posted

    Oct 18, 2022

  • Journal Article

    Most Users Do Not Follow Political Elites on Twitter; Those Who Do, Show Overwhelming Preferences for Ideological Congruity.

    Science Advances, 2022

    View Article View abstract

    We offer comprehensive evidence of preferences for ideological congruity when people engage with politicians, pundits, and news organizations on social media. Using four years of data (2016-2019) from a random sample of 1.5 million Twitter users, we examine three behaviors studied separately to date: (a) following of in-group vs. out-group elites, (b) sharing in-group vs. out-group information (retweeting), and (c) commenting on the shared information (quote tweeting). We find the majority of users (60%) do not follow any political elites. Those who do, follow in-group elite accounts at much higher rates than out-group accounts (90% vs. 10%), share information from in-group elites 13 times more frequently than from out-group elites, and often add negative comments to the shared out-group information. Conservatives are twice as likely as liberals to share in-group vs. out-group content. These patterns are robust, emerge across issues and political elites, and hold regardless of users' ideological extremity.

    Date Posted

    Sep 30, 2022

  • Journal Article

    What We Learned About The Gateway Pundit from its Own Web Traffic Data

    Workshop Proceedings of the 16th International AAAI Conference on Web and Social Media, 2022

    View Article View abstract

    To mitigate the spread of false news, researchers need to understand who visits low-quality news sites, what brings people to those sites, and what content they prefer to consume. Due to challenges in observing most direct website traffic, existing research primarily relies on alternative data sources, such as engagement signals from social media posts. However, such signals are at best only proxies for actual website visits. During an audit of far-right news websites, we discovered that The Gateway Pundit (TGP) has made its web traffic data publicly available, giving us a rare opportunity to understand what news pages people actually visit. We collected 68 million web traffic visits to the site over a one-month period and analyzed how people consume news via multiple features. Our referral analysis shows that search engines and social media platforms are the main drivers of traffic; our geo-location analysis reveals that TGP is more popular in counties where more people voted for Trump in 2020. In terms of content, topics related to the 2020 US presidential election and the 2021 US Capitol riot had the highest average number of visits. We also use these data to quantify to what degree social media engagement signals correlate with actual web visit counts. To do so, we collect Facebook and Twitter posts with URLs from TGP during the same time period. We show that all engagement signals positively correlate with web visit counts, but with varying correlation strengths. For example, total interaction on Facebook correlates better than Twitter retweet count. Our insights can also help researchers choose the right metrics when they measure the impact of news URLs on social media.

    Date Posted

    Jun 01, 2022

  • Working Paper

    To Moderate, Or Not to Moderate: Strategic Domain Sharing by Congressional Campaigns

    Working Paper, April 2022

    View Article View abstract

    We test whether candidates move to the extremes before a primary but then return to the center for the general election to appeal to the different preferences of each electorate. Incumbents are now more vulnerable to primary challenges than ever, as social media offers challengers a viable pathway for fundraising and messaging, while the homogeneity of districts has reduced general election competitiveness. To assess candidates' ideological trajectories, we estimate the revealed ideology of 2020 congressional candidates (incumbents, their primary challengers, and open seat candidates) before and after their primaries, using a homophily-based measure of domains shared on Twitter. This method provides temporally granular data to observe changes in communication within a single election campaign cycle. We find that incumbents did move towards the extremes for their primaries and back towards the center for the general election, but only when threatened by a well-funded primary challenge; non-incumbents showed no such pattern.

    Date Posted

    Apr 05, 2022

  • Journal Article

    Why Botter: How Pro-Government Bots Fight Opposition in Russia

    American Political Science Review, 2022

    View Article View abstract

    There is abundant anecdotal evidence that nondemocratic regimes are harnessing new digital technologies known as social media bots to facilitate policy goals. However, few previous attempts have been made to systematically analyze the use of bots that are aimed at a domestic audience in autocratic regimes. We develop two alternative theoretical frameworks for predicting the use of pro-regime bots: one which focuses on bot deployment in response to offline protest and the other in response to online protest. We then test the empirical implications of these frameworks with an original collection of Twitter data generated by Russian pro-government bots. We find that online opposition activities produce stronger reactions from bots than offline protests do. Our results provide a lower bound on the effects of bots on the Russian Twittersphere and highlight the importance of bot detection for the study of political communication on social media in nondemocratic regimes.

    Date Posted

    Feb 21, 2022

  • Journal Article

    Short of Suspension: How Suspension Warnings Can Reduce Hate Speech on Twitter

    Perspectives on Politics, 2023

    View Article View abstract

    Debates around the effectiveness of high-profile Twitter account suspensions and similar bans on abusive users across social media platforms abound. Yet we know little about the effectiveness of warning a user about the possibility of suspending their account as opposed to outright suspensions in reducing hate speech. With a pre-registered experiment, we provide causal evidence that a warning message can reduce the use of hateful language on Twitter, at least in the short term. We design our messages based on the literature on deterrence, and test versions that emphasize the legitimacy of the sender, the credibility of the message, and the costliness of being suspended. We find that the act of warning a user of the potential consequences of their behavior can significantly reduce their hateful language for one week. We also find that warning messages that aim to appear legitimate in the eyes of the target user seem to be the most effective. In light of these findings, we consider the policy implications of platforms adopting a more aggressive approach to warning users that their accounts may be suspended as a tool for reducing hateful speech online.

    Date Posted

    Nov 22, 2021

  • Working Paper

    Network Embedding Methods for Large Networks in Political Science

    Working Paper, November 2021

    View Article View abstract

    Social networks play an important role in many political science studies. With the rise of social media, these networks have grown in both size and complexity. Analysis of these large networks requires generating feature representations that can be used in machine learning models. One way to generate these feature representations is to use network embedding methods, which learn low-dimensional feature representations of the nodes and edges in a network. While there is some literature comparing the advantages and shortcomings of these models, to our knowledge there has been no analysis of the applicability of network embedding models to classification tasks in political science. In this paper, we compare the performance of five prominent network embedding methods on predicting the ideology of Twitter users and the ideology of Internet domains. We find that LINE provides the best feature representation across all four datasets that we use, resulting in the highest prediction accuracy. Finally, we provide guidelines for researchers on the use of these models in their own research.

    Date Posted

    Nov 12, 2021

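LINE, the method the working paper found most effective, optimizes node vectors so that connected nodes have similar representations. A toy pure-Python sketch of its first-order proximity objective (edges pushed toward high dot product, random node pairs toward low, via negative sampling) is below; the graph is made up, and a real analysis would use an optimized implementation:

```python
# Toy sketch of LINE's first-order objective: learn low-dimensional node
# vectors so connected nodes have a high dot product and randomly paired
# nodes a low one (one negative sample per edge). Illustrative only.

import math
import random

def line_first_order(edges, dim=4, epochs=500, lr=0.05, seed=0):
    rng = random.Random(seed)
    nodes = sorted({x for e in edges for x in e})
    emb = {v: [rng.uniform(-0.5, 0.5) for _ in range(dim)] for v in nodes}
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    for _ in range(epochs):
        for u, v in edges:
            # One positive update (label 1) and one negative sample (label 0).
            for target, other in ((1.0, v), (0.0, rng.choice(nodes))):
                # Gradient of the log-likelihood for this pair and label.
                g = target - sigmoid(sum(a * b for a, b in zip(emb[u], emb[other])))
                for k in range(dim):
                    du = lr * g * emb[other][k]
                    do = lr * g * emb[u][k]
                    emb[u][k] += du
                    emb[other][k] += do
    return emb

# Two hypothetical communities joined by a single bridge edge.
edges = [("a", "b"), ("b", "c"), ("a", "c"),
         ("x", "y"), ("y", "z"), ("x", "z"), ("c", "x")]
emb = line_first_order(edges)
```

The learned vectors can then feed a downstream classifier, which is the use case the paper benchmarks (predicting user and domain ideology).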
  • Journal Article

    Twitter Flagged Donald Trump’s Tweets with Election Misinformation: They Continued to Spread Both On and Off the Platform

    Harvard Kennedy School (HKS) Misinformation Review, 2021

    View Article View abstract

    We analyze the spread of Donald Trump’s tweets that were flagged by Twitter using two intervention strategies—attaching a warning label and blocking engagement with the tweet entirely. We find that while blocking engagement on certain tweets limited their diffusion, messages we examined with warning labels spread further on Twitter than those without labels. Additionally, the messages that had been blocked on Twitter remained popular on Facebook, Instagram, and Reddit, being posted more often and garnering more visibility than messages that had either been labeled by Twitter or received no intervention at all. Taken together, our results emphasize the importance of considering content moderation at the ecosystem level.

  • Journal Article

    The Times They Are Rarely A-Changin': Circadian Regularities in Social Media Use

    Journal of Quantitative Description: Digital Media, 2021

    View Article View abstract

    This paper uses geolocated Twitter histories from approximately 25,000 individuals in 6 different time zones and 3 different countries to construct a proper time-zone-dependent hourly baseline for social media activity studies. We establish that, across multiple regions and time periods, interaction with social media is strongly conditioned by traditional bio-rhythmic or “Circadian” patterns, and that in the United States, this pattern is itself further conditioned by the ideological bent of the user. Using a time series of these histories around the 2016 U.S. Presidential election, we show that external events of great significance can disrupt traditional social media activity patterns, and that this disruption can be significant (in some cases doubling the amplitude and shifting the phase of activity up to an hour). We find that the disruption of use patterns can last an extended period of time, and in many cases, aspects of this disruption would not be detected without a circadian baseline.

    Date Posted

    Apr 26, 2021

  • Journal Article

    YouTube Recommendations and Effects on Sharing Across Online Social Platforms

    Proceedings of the ACM on Human-Computer Interaction, 2021

    View Article View abstract

    In January 2019, YouTube announced it would exclude potentially harmful content from video recommendations but allow such videos to remain on the platform. While this step intends to reduce YouTube's role in propagating such content, the continued availability of these videos in other online spaces makes it unclear whether this compromise actually reduces their spread. To assess this impact, we apply interrupted time series models to measure whether different types of YouTube sharing on Twitter and Reddit changed significantly in the eight months around YouTube's announcement. We evaluate video sharing across three curated sets of potentially harmful, anti-social content: a set of conspiracy videos that have been shown to experience reduced recommendations in YouTube, a larger set of videos posted by conspiracy-oriented channels, and a set of videos posted by alternative influence network (AIN) channels. As a control, we also evaluate effects on video sharing in a dataset of videos from mainstream news channels. Results show conspiracy-labeled and AIN videos that have evidence of YouTube's de-recommendation experience a significant decreasing trend in sharing on both Twitter and Reddit. For videos from conspiracy-oriented channels, however, we see no significant effect on Twitter but find a significant increase in the level of conspiracy-channel sharing on Reddit. For mainstream news sharing, we actually see an increase in trend on both platforms, suggesting that YouTube's suppression of particular content types has a targeted effect. This work finds evidence that reducing exposure to anti-social videos within YouTube, without deletion, has potential pro-social, cross-platform effects. At the same time, increases in the level of conspiracy-channel sharing raise concerns about content producers' responses to these changes, and platform transparency is needed to evaluate these effects further.

    Date Posted

    Apr 22, 2021

  • Journal Article

    Tweeting Beyond Tahrir: Ideological Diversity and Political Intolerance in Egyptian Twitter Networks

    World Politics, 2021

    View Article View abstract

    Do online social networks affect political tolerance in the highly polarized climate of postcoup Egypt? Taking advantage of the real-time networked structure of Twitter data, the authors find that not only is greater network diversity associated with lower levels of intolerance, but also that longer exposure to a diverse network is linked to less expression of intolerance over time. The authors find that this relationship persists in both elite and non-elite diverse networks. Exploring the mechanisms by which network diversity might affect tolerance, the authors offer suggestive evidence that social norms in online networks may shape individuals’ propensity to publicly express intolerant attitudes. The findings contribute to the political tolerance literature and enrich the ongoing debate over the relationship between online echo chambers and political attitudes and behavior by providing new insights from a repressive authoritarian context.

  • Journal Article

    Trumping Hate on Twitter? Online Hate Speech in the 2016 U.S. Election Campaign and its Aftermath.

    Quarterly Journal of Political Science, 2021

    View Article View abstract

    To what extent did online hate speech and white nationalist rhetoric on Twitter increase over the course of Donald Trump's 2016 presidential election campaign and its immediate aftermath? The prevailing narrative suggests that Trump's political rise — and his unexpected victory — lent legitimacy to and popularized bigoted rhetoric that was once relegated to the dark corners of the Internet. However, our analysis of over 750 million tweets related to the election, in addition to almost 400 million tweets from a random sample of American Twitter users, provides systematic evidence that hate speech did not increase on Twitter over this period. Using both machine-learning-augmented dictionary-based methods and a novel classification approach leveraging data from Reddit communities associated with the alt-right movement, we observe no persistent increase in hate speech or white nationalist language either over the course of the campaign or in the six months following Trump's election. While key campaign events and policy announcements produced brief spikes in hateful language, these bursts quickly dissipated. Overall we find no empirical support for the proposition that Trump's divisive campaign or election increased hate speech on Twitter.

    Date Posted

    Jan 11, 2021

  • Journal Article

    Political Knowledge and Misinformation in the Era of Social Media: Evidence From the 2015 UK Election

    British Journal of Political Science, 2022

    View Article View abstract

    Does social media educate voters, or mislead them? This study measures changes in political knowledge among a panel of voters surveyed during the 2015 UK general election campaign while monitoring the political information to which they were exposed on the Twitter social media platform. The study's panel design permits identification of the effect of information exposure on changes in political knowledge. Twitter use led to higher levels of knowledge about politics and public affairs, as information from news media improved knowledge of politically relevant facts, and messages sent by political parties increased knowledge of party platforms. But in a troubling demonstration of campaigns' ability to manipulate knowledge, messages from the parties also shifted voters' assessments of the economy and immigration in directions favorable to the parties' platforms, leaving some voters with beliefs further from the truth at the end of the campaign than they were at its beginning.

  • Working Paper

    Opinion Change and Learning in the 2016 U.S. Presidential Election: Evidence from a Panel Survey Combined with Direct Observation of Social Media Activity

    Working Paper, September 2020

    View Article View abstract

    The role of the media in influencing people’s attitudes and opinions is difficult to demonstrate because media consumption by survey respondents is usually unobserved in datasets containing information on attitudes and vote choice. This paper leverages behavioral data combined with responses from a multi-wave panel to test whether Democrats who see more stories from liberal news sources on Twitter develop more liberal positions over time and, conversely, whether Republicans are more likely to revise their views in a conservative direction if they are exposed to more news on Twitter from conservative media sources. We find evidence that exposure to ideologically framed information and arguments changes voters’ own positions, but has a limited impact on perceptions of where the candidates stand on the issues.

    Date Posted

    Sep 24, 2020

  • Journal Article

    Content-Based Features Predict Social Media Influence Operations

    Science Advances, 2020

    View Article View abstract

    We study how easy it is to distinguish influence operations from organic social media activity by assessing the performance of a platform-agnostic machine learning approach. Our method uses public activity to detect content that is part of coordinated influence operations based on human-interpretable features derived solely from content. We test this method on publicly available Twitter data on Chinese, Russian, and Venezuelan troll activity targeting the United States, as well as the Reddit dataset of Russian influence efforts. To assess how well content-based features distinguish these influence operations from random samples of general and political American users, we train and test classifiers on a monthly basis for each campaign across five prediction tasks. Content-based features perform well across period, country, platform, and prediction task. Industrialized production of influence campaign content leaves a distinctive signal in user-generated content that allows tracking of campaigns from month to month and across different accounts.

    Date Posted

    Jul 22, 2020

  • Journal Article

    Cross-Platform State Propaganda: Russian Trolls on Twitter and YouTube During the 2016 U.S. Presidential Election

    The International Journal of Press/Politics, 2020

    View Article View abstract

    This paper investigates online propaganda strategies of the Internet Research Agency (IRA)—Russian “trolls”—during the 2016 U.S. presidential election. We assess claims that the IRA sought either to (1) support Donald Trump or (2) sow discord among the U.S. public by analyzing hyperlinks contained in 108,781 IRA tweets. Our results show that although IRA accounts promoted links to both sides of the ideological spectrum, “conservative” trolls were more active than “liberal” ones. The IRA also shared content across social media platforms, particularly YouTube—the second-most linked destination among IRA tweets. Although overall news content shared by trolls leaned moderate to conservative, we find troll accounts on both sides of the ideological spectrum, and these accounts maintain their political alignment. Links to YouTube videos were decidedly conservative, however. While mixed, this evidence is consistent with the IRA’s supporting the Republican campaign, but the IRA’s strategy was multifaceted, with an ideological division of labor among accounts. We contextualize these results as consistent with a pre-propaganda strategy. This work demonstrates the need to view political communication in the context of the broader media ecology, as governments exploit the interconnected information ecosystem to pursue covert propaganda strategies.

    Date Posted

    Jul 01, 2020

  • Journal Article

    Political Psycholinguistics: A Comprehensive Analysis of the Language Habits of Liberal and Conservative Social Media Users.

    Journal of Personality and Social Psychology, 2020

    View Article View abstract

    For nearly a century social scientists have sought to understand left–right ideological differences in values, motives, and thinking styles. Much progress has been made, but — as in other areas of research — this work has been criticized for relying on small and statistically unrepresentative samples and the use of reactive, self-report measures that lack ecological validity. In an effort to overcome these limitations, we employed automated text analytic methods to investigate the spontaneous, naturally occurring use of language in nearly 25,000 Twitter users. We derived 27 hypotheses from the literature on political psychology and tested them using 32 individual dictionaries. In 23 cases, we observed significant differences in the linguistic styles of liberals and conservatives. For instance, liberals used more language that conveyed benevolence, whereas conservatives used more language pertaining to threat, power, tradition, resistance to change, certainty, security, anger, anxiety, and negative emotion in general. In 17 cases, there were also significant effects of ideological extremity. For instance, moderates used more benevolent language, whereas extremists used more language pertaining to inhibition, tentativeness, affiliation, resistance to change, certainty, security, anger, anxiety, negative affect, swear words, and death-related language. These research methods, which are easily adaptable, open up new and unprecedented opportunities for conducting unobtrusive research in psycholinguistics and political psychology with large and diverse samples.

    Date Posted

    Jan 09, 2020

  • Journal Article

    Don’t Republicans Tweet Too? Using Twitter to Assess the Consequences of Political Endorsements by Celebrities

    Perspectives on Politics, 2020

    View Article View abstract

    Michael Jordan supposedly justified his decision to stay out of politics by noting that Republicans buy sneakers too. In the social media era, the name of the game for celebrities is engagement with fans. So why then do celebrities risk talking about politics on social media, which is likely to antagonize a portion of their fan base? With this question in mind, we analyze approximately 220,000 tweets from 83 celebrities who chose to endorse a presidential candidate in the 2016 U.S. presidential election campaign to assess whether there is a cost — defined in terms of engagement on Twitter — for celebrities who discuss presidential candidates. We also examine whether celebrities behave similarly to other campaign surrogates in being more likely to take on the “attack dog” role by going negative more often than going positive. More specifically, we document how often celebrities of distinct political preferences tweet about Donald Trump, Bernie Sanders, and Hillary Clinton, and we show that followers of opinionated celebrities do not withhold engagement when entertainers become politically mobilized and do indeed often go negative. Interestingly, in some cases political content from celebrities actually turns out to be more popular than typical lifestyle tweets.


    Date Posted

    Sep 06, 2019