Academic Research

CSMaP faculty, postdoctoral fellows, and students publish rigorous, peer-reviewed research in top academic journals and post working papers sharing ongoing work.

  • Journal Article

    Estimating the Ideology of Political YouTube Videos

    Political Analysis, 2024

    We present a method for estimating the ideology of political YouTube videos. As online media increasingly influences how people engage with politics, quantifying the ideology of such media becomes increasingly important for research. The subfield of estimating ideology as a latent variable has often focused on traditional actors such as legislators, while more recent work has used social media data to estimate the ideology of ordinary users, political elites, and media sources. We build on this work by developing a method to estimate the ideologies of YouTube videos, an important subset of media, based on their accompanying text metadata. First, we take Reddit posts linking to YouTube videos and use correspondence analysis to place those videos in an ideological space. We then train a text-based model with those estimated ideologies as training labels, enabling us to estimate the ideologies of videos not posted on Reddit. These predicted ideologies are then validated against human labels. Finally, we demonstrate the utility of this method by applying it to the watch histories of survey respondents with self-identified ideologies to evaluate the prevalence of echo chambers on YouTube. Our approach gives video-level scores based only on supplied text metadata, is scalable, and can be easily adjusted to account for changes in the ideological climate. This method could also be generalized to estimate the ideology of other items referenced or posted on Reddit.

    Date Posted: Feb 13, 2024
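
    The first step above, placing videos in an ideological space based on which subreddits share them, can be sketched as a toy correspondence analysis. This is an illustrative reimplementation, not the authors' code; the count matrix and the single retained dimension are invented for the example.

```python
import numpy as np

def ca_column_scores(N, n_dims=1):
    """Correspondence analysis scores for the columns (here: videos)
    of a co-sharing count matrix N (rows: subreddits)."""
    N = np.asarray(N, dtype=float)
    P = N / N.sum()                      # correspondence matrix
    r = P.sum(axis=1)                    # row masses
    c = P.sum(axis=0)                    # column masses
    # standardized residuals: D_r^{-1/2} (P - r c^T) D_c^{-1/2}
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    # principal coordinates for the columns
    return (Vt.T[:, :n_dims] * sv[:n_dims]) / np.sqrt(c)[:, None]

# toy co-sharing counts: two "left" and two "right" subreddits/videos
N = [[5, 4, 0, 1],
     [4, 5, 1, 0],
     [0, 1, 5, 4],
     [1, 0, 4, 5]]
scores = ca_column_scores(N)[:, 0]   # one ideological score per video
```

    The first dimension contrasts the two blocks, so videos shared by the same subreddits end up on the same side of the scale (the sign of the axis is arbitrary, as in any correspondence analysis).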

  • Working Paper

    Concept-Guided Chain-of-Thought Prompting for Pairwise Comparison Scaling of Texts with Large Language Models

    Working Paper, October 2023

    Existing text scaling methods often require a large corpus, struggle with short texts, or require labeled data. We develop a text scaling method that leverages the pattern recognition capabilities of generative large language models (LLMs). Specifically, we propose concept-guided chain-of-thought (CGCoT), which uses prompts designed to summarize ideas and identify target parties in texts to generate concept-specific breakdowns, in many ways similar to guidance for human coder content analysis. CGCoT effectively shifts pairwise text comparisons from a reasoning problem to a pattern recognition problem. We then pairwise compare concept-specific breakdowns using an LLM. We use the results of these pairwise comparisons to estimate a scale using the Bradley-Terry model. We use this approach to scale affective speech on Twitter. Our measures correlate more strongly with human judgments than alternative approaches like Wordfish. Besides a small set of pilot data to develop the CGCoT prompts, our measures require no additional labeled data and produce binary predictions comparable to a RoBERTa-Large model fine-tuned on thousands of human-labeled tweets. We demonstrate how combining substantive knowledge with LLMs can create state-of-the-art measures of abstract concepts.

    Date Posted: Oct 18, 2023
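
    The final scaling step described above, turning pairwise comparisons into a scale with the Bradley-Terry model, can be sketched with a standard iterative fit. This is a minimal illustration, not the paper's implementation; the wins matrix is toy data standing in for LLM pairwise judgments.

```python
import math

def bradley_terry(wins, n_iter=200):
    """Fit Bradley-Terry strengths from a wins matrix:
    wins[i][j] = number of comparisons item i won against item j."""
    n = len(wins)
    p = [1.0] * n
    for _ in range(n_iter):
        new_p = []
        for i in range(n):
            w_i = sum(wins[i])                       # total wins for i
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            new_p.append(w_i / denom if denom > 0 else p[i])
        s = sum(new_p)
        p = [x * n / s for x in new_p]               # renormalize
    return [math.log(x) for x in p]                  # log-scale scores

# toy pairwise outcomes: item 0 usually wins, item 2 usually loses
wins = [[0, 8, 9],
        [2, 0, 7],
        [1, 3, 0]]
scores = bradley_terry(wins)
```

    The recovered log-strengths order the items by how often they win their comparisons, which is what makes pairwise LLM judgments scalable into a continuous measure.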

  • Working Paper

    Large Language Models Can Be Used to Estimate the Latent Positions of Politicians

    Working Paper, September 2023

    Existing approaches to estimating politicians' latent positions along specific dimensions often fail when relevant data is limited. We leverage the embedded knowledge in generative large language models (LLMs) to address this challenge and measure lawmakers' positions along specific political or policy dimensions. We prompt an instruction/dialogue-tuned LLM to pairwise compare lawmakers and then scale the resulting graph using the Bradley-Terry model. We estimate novel measures of U.S. senators' positions on liberal-conservative ideology, gun control, and abortion. Our liberal-conservative scale, used to validate LLM-driven scaling, strongly correlates with existing measures and offsets interpretive gaps, suggesting LLMs synthesize relevant data from internet and digitized media rather than memorizing existing measures. Our gun control and abortion measures -- the first of their kind -- differ from the liberal-conservative scale in face-valid ways and predict interest group ratings and legislator votes better than ideology alone. Our findings suggest LLMs hold promise for solving complex social science measurement problems.

  • Journal Article

    Measuring the Ideology of Audiences for Web Links and Domains Using Differentially Private Engagement Data

    Proceedings of the International AAAI Conference on Web and Social Media, 2023

    Date Posted: Jun 02, 2023

  • Journal Article

    Dictionary-Assisted Supervised Contrastive Learning

    Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2022

    Text analysis in the social sciences often involves using specialized dictionaries to reason with abstract concepts, such as perceptions about the economy or abuse on social media. These dictionaries allow researchers to impart domain knowledge and note subtle usages of words relating to a concept(s) of interest. We introduce the dictionary-assisted supervised contrastive learning (DASCL) objective, allowing researchers to leverage specialized dictionaries when fine-tuning pretrained language models. The text is first keyword simplified: a common, fixed token replaces any word in the corpus that appears in the dictionary(ies) relevant to the concept of interest. During fine-tuning, a supervised contrastive objective draws closer the embeddings of the original and keyword-simplified texts of the same class while pushing further apart the embeddings of different classes. The keyword-simplified texts of the same class are more textually similar than their original text counterparts, which additionally draws the embeddings of the same class closer together. Combining DASCL and cross-entropy improves classification performance metrics in few-shot learning settings and social science applications compared to using cross-entropy alone and alternative contrastive and data augmentation methods.

    Date Posted: Oct 27, 2022
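
    The keyword-simplification step can be sketched as follows. The fixed token, dictionary, and whitespace tokenization are illustrative simplifications of what the paper describes; in this toy version, punctuation attached to a matched word is dropped along with it.

```python
def keyword_simplify(text, dictionary, token="<kw>"):
    """Replace any word appearing in the concept dictionary with a
    single fixed token, as in DASCL's keyword-simplification step."""
    vocab = {w.lower() for w in dictionary}
    out = []
    for tok in text.split():
        stripped = tok.strip(".,!?;:()'\"").lower()
        out.append(token if stripped in vocab else tok)
    return " ".join(out)

simplified = keyword_simplify("The economy is weak, very weak.",
                              {"economy", "weak"})
```

    During fine-tuning, the original text and its keyword-simplified copy form a positive pair for the supervised contrastive objective, pulling same-class embeddings together.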

  • Working Paper

    Network Embedding Methods for Large Networks in Political Science

    Working Paper, November 2021

    Social networks play an important role in many political science studies. With the rise of social media, these networks have grown in both size and complexity. Analysis of these large networks requires the generation of feature representations that can be used in machine learning models. One way to generate these feature representations is to use network embedding methods, which learn low-dimensional feature representations of nodes and edges in a network. While there is some literature comparing the advantages and shortcomings of these models, to our knowledge there has been no analysis of the applicability of network embedding models to classification tasks in political science. In this paper, we compare the performance of five prominent network embedding methods on predicting the ideology of Twitter users and of Internet domains. We find that LINE provides the best feature representation across all four datasets that we use, resulting in the highest accuracy. Finally, we provide guidelines for researchers on the use of these models in their own research.

    Date Posted: Nov 12, 2021

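
    A minimal, pure-Python sketch of the first-order objective behind embeddings like LINE: observed edges push node vectors together, while random negative samples push them apart. This is a toy reimplementation for intuition, not any of the benchmarked libraries, and all hyperparameters are invented.

```python
import math, random

def line_embed(edges, n_nodes, dim=8, epochs=200, lr=0.025, neg=5, seed=0):
    """First-order LINE-style embedding via SGD with negative sampling:
    raise sigmoid(u_i . u_j) for edges, lower it for random pairs."""
    rng = random.Random(seed)
    emb = [[rng.uniform(-0.5, 0.5) / dim for _ in range(dim)]
           for _ in range(n_nodes)]

    def update(i, j, label):
        dot = sum(a * b for a, b in zip(emb[i], emb[j]))
        g = lr * (label - 1.0 / (1.0 + math.exp(-dot)))
        for d in range(dim):
            ei, ej = emb[i][d], emb[j][d]   # use stale values symmetrically
            emb[i][d] += g * ej
            emb[j][d] += g * ei

    for _ in range(epochs):
        for i, j in edges:
            update(i, j, 1.0)                          # positive pair
            for _ in range(neg):
                update(i, rng.randrange(n_nodes), 0.0)  # negative sample
    return emb

# two tight communities: {0, 1, 2} and {3, 4, 5}
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]
emb = line_embed(edges, n_nodes=6)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))
```

    After training, nodes in the same community have larger inner products than nodes across communities, which is the property downstream classifiers (e.g. of user ideology) exploit.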
  • Journal Article

    Accessibility and Generalizability: Are Social Media Effects Moderated by Age or Digital Literacy?

    Research & Politics, 2021

    An emerging empirical regularity suggests that older people use and respond to social media very differently than younger people. Older people are the fastest-growing population of Internet and social media users in the U.S., and this heterogeneity will soon become central to online politics. However, many important experiments in this field have been conducted on online samples that do not contain enough older people to be useful to generalize to the current population of Internet users; this issue is more pronounced for studies that are even a few years old. In this paper, we report the results of replicating two experiments involving social media (specifically, Facebook) conducted on one such sample lacking older users (Amazon’s Mechanical Turk) using a source of online subjects which does contain sufficient variation in subject age. We add a standard battery of questions designed to explicitly measure digital literacy. We find evidence of significant treatment effect heterogeneity in subject age and digital literacy in the replication of one of the two experiments. This result is an example of limitations to generalizability of research conducted on samples where selection is related to treatment effect heterogeneity; specifically, this result indicates that Mechanical Turk should not be used to recruit subjects when researchers suspect treatment effect heterogeneity in age or digital literacy, as we argue should be the case for research on digital media effects.

    Date Posted: Jun 09, 2021

  • Journal Article

    YouTube Recommendations and Effects on Sharing Across Online Social Platforms

    Proceedings of the ACM on Human-Computer Interaction, 2021

    In January 2019, YouTube announced it would exclude potentially harmful content from video recommendations but allow such videos to remain on the platform. While this step is intended to reduce YouTube's role in propagating such content, the continued availability of these videos in other online spaces makes it unclear whether this compromise actually reduces their spread. To assess this impact, we apply interrupted time series models to measure whether different types of YouTube sharing on Twitter and Reddit changed significantly in the eight months around YouTube's announcement. We evaluate video sharing across three curated sets of potentially harmful, anti-social content: a set of conspiracy videos that have been shown to experience reduced recommendations on YouTube, a larger set of videos posted by conspiracy-oriented channels, and a set of videos posted by alternative influence network (AIN) channels. As a control, we also evaluate effects on video sharing in a dataset of videos from mainstream news channels. Results show that conspiracy-labeled and AIN videos with evidence of YouTube's de-recommendation experienced a significant decreasing trend in sharing on both Twitter and Reddit. For videos from conspiracy-oriented channels, however, we see no significant effect on Twitter but find a significant increase in the level of conspiracy-channel sharing on Reddit. For mainstream news sharing, we actually see an increase in trend on both platforms, suggesting that YouTube's suppression of particular content types has a targeted effect. This work finds evidence that reducing exposure to anti-social videos within YouTube, without deletion, has potential pro-social, cross-platform effects. At the same time, increases in the level of conspiracy-channel sharing raise concerns about content producers' responses to these changes, and platform transparency is needed to evaluate these effects further.

    Date Posted: Apr 22, 2021
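
    The interrupted time series design above amounts to a segmented regression with a level change and a trend change at the announcement date. A minimal sketch, with an ordinary least squares fit on noise-free toy data in place of the actual sharing series:

```python
def fit_its(y, t0):
    """Fit y_t = b0 + b1*t + b2*post_t + b3*(t - t0)*post_t by OLS,
    where post_t = 1 for t >= t0. b2 is the level change at the
    interruption and b3 the trend change."""
    n, k = len(y), 4
    X = [[1.0, t, float(t >= t0), (t - t0) * float(t >= t0)]
         for t in range(n)]
    # normal equations (X'X) b = X'y, solved by Gauss-Jordan elimination
    XtX = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)]
           for i in range(k)]
    Xty = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    A = [row[:] + [b] for row, b in zip(XtX, Xty)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(k):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [A[i][k] / A[i][i] for i in range(k)]

# toy series: level jump of +3 and slope change of -0.2 at t0 = 10
y = [2 + 0.5 * t + ((3 - 0.2 * (t - 10)) if t >= 10 else 0)
     for t in range(20)]
b0, b1, b2, b3 = fit_its(y, 10)
```

    On noise-free data the fit recovers the baseline intercept and trend plus the post-interruption level and trend changes exactly; on real sharing counts one would additionally model autocorrelation, as interrupted time series analyses typically do.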

  • Working Paper

    News Sharing on Social Media: Mapping the Ideology of News Media Content, Citizens, and Politicians

    Working Paper, November 2020

    This article examines the news sharing behavior of politicians and ordinary users by mapping the ideological sharing space of political information on social media. As data, we use the near-universal currency of online political information exchange: URLs (i.e., web links). We introduce a methodological approach (and statistical software) that unifies the measurement of political ideology online, using social media sharing data to jointly estimate the ideology of (1) politicians, (2) social media users, and (3) the news sources that they share online. Second, we validate the measure by comparing it to well-known measures of roll call voting behavior for members of Congress. Third, we show empirically that legislators who represent less competitive districts are more likely to share politically polarizing news than legislators with similar voting records in more competitive districts. Finally, we demonstrate that it is nevertheless not politicians but ordinary users who share the most ideologically extreme content and contribute most to the polarized online news-sharing ecosystem. Our approach opens up many avenues for research into the communication strategies of elites, citizens, and other actors who seek to influence political behavior and sway public opinion by sharing political information online.

  • Working Paper

    A Comparison of Methods in Political Science Text Classification: Transfer Learning Language Models for Politics

    Working Paper, October 2020

    Automated text classification has rapidly become an important tool for political analysis. Recent advancements in NLP, enabled by advances in deep learning, now achieve state-of-the-art results in many standard tasks for the field. However, these methods require large amounts of both computing power and text data to learn the characteristics of the language, resources which are not always accessible to political scientists. One solution is a transfer learning approach, where knowledge learned in one area or source task is transferred to another area or target task. A class of models that embodies this approach is language models, which demonstrate extremely high levels of performance. We investigate the performance of these models in political science by comparing multiple text classification methods. We find that RoBERTa and XLNet, language models that rely on the Transformer, require fewer computing resources and less training data to perform on par with, or outperform, several political science text classification methods. Moreover, we find that the increase in accuracy is especially significant in the case of small labeled datasets, highlighting the potential of pretrained language models for reducing the data-labeling cost of supervised methods for political scientists.

    Date Posted: Oct 20, 2020

  • Journal Article

    Content-Based Features Predict Social Media Influence Operations

    Science Advances, 2020

    We study how easy it is to distinguish influence operations from organic social media activity by assessing the performance of a platform-agnostic machine learning approach. Our method uses public activity to detect content that is part of coordinated influence operations based on human-interpretable features derived solely from content. We test this method on publicly available Twitter data on Chinese, Russian, and Venezuelan troll activity targeting the United States, as well as the Reddit dataset of Russian influence efforts. To assess how well content-based features distinguish these influence operations from random samples of general and political American users, we train and test classifiers on a monthly basis for each campaign across five prediction tasks. Content-based features perform well across period, country, platform, and prediction task. Industrialized production of influence campaign content leaves a distinctive signal in user-generated content that allows tracking of campaigns from month to month and across different accounts.

    Date Posted: Jul 22, 2020
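
    A toy version of "human-interpretable features derived solely from content": the specific feature names and the example post below are illustrative, not the paper's actual feature set.

```python
import re
from urllib.parse import urlparse

def content_features(post):
    """Extract simple, interpretable features from a single post's text
    alone, with no account metadata or network information."""
    words = post.split()
    urls = re.findall(r"https?://\S+", post)
    return {
        "n_words": len(words),
        "n_hashtags": sum(w.startswith("#") for w in words),
        "n_mentions": sum(w.startswith("@") for w in words),
        "n_urls": len(urls),
        "domains": sorted({urlparse(u).netloc for u in urls}),
        "share_upper": sum(c.isupper() for c in post) / max(len(post), 1),
    }

features = content_features("BREAKING: @user check #news http://example.com/x")
```

    Feature vectors like these, aggregated per account and month, are the kind of platform-agnostic input a classifier can use to separate coordinated campaigns from organic activity.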

  • Journal Article

    Automated Text Classification of News Articles: A Practical Guide

    Political Analysis, 2021

    Automated text analysis methods have made possible the classification of large corpora of text by measures such as topic and tone. Here, we provide a guide to help researchers navigate the consequential decisions they need to make before any measure can be produced from the text. We consider, both theoretically and empirically, the effects of such choices using as a running example efforts to measure the tone of New York Times coverage of the economy. We show that two reasonable approaches to corpus selection yield radically different corpora and we advocate for the use of keyword searches rather than predefined subject categories provided by news archives. We demonstrate the benefits of coding using article segments instead of sentences as units of analysis. We show that, given a fixed number of codings, it is better to increase the number of unique documents coded rather than the number of coders for each document. Finally, we find that supervised machine learning algorithms outperform dictionaries on a number of criteria. Overall, we intend this guide to serve as a reminder to analysts that thoughtfulness and human validation are key to text-as-data methods, particularly in an age when it is all too easy to computationally classify texts without attending to the methodological choices therein.

    Date Posted: Jun 09, 2020
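
    The recommended keyword-search approach to corpus selection can be sketched as a simple filter over headline and body text; the article fields and keywords here are hypothetical.

```python
def keyword_select(articles, keywords):
    """Select a corpus by keyword search over headline and body text,
    rather than relying on an archive's predefined subject categories."""
    kws = [k.lower() for k in keywords]

    def hit(article):
        text = (article.get("headline", "") + " "
                + article.get("body", "")).lower()
        return any(k in text for k in kws)

    return [a for a in articles if hit(a)]

# toy archive: only the first article concerns the economy
articles = [
    {"headline": "Fed raises rates", "body": "The economy slowed."},
    {"headline": "Local sports roundup", "body": "The team won."},
]
corpus = keyword_select(articles, ["economy", "inflation"])
```

    The point of the guide's comparison is that this kind of explicit, reproducible search yields a very different corpus than an archive's opaque subject tags, so the choice should be made deliberately and documented.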

  • Journal Article

    For Whom the Bot Tolls: A Neural Networks Approach to Measuring Political Orientation of Twitter Bots in Russia

    SAGE Open, 2019

    Computational propaganda and the use of automated accounts in social media have recently become the focus of public attention, with alleged Russian government activities abroad provoking particularly widespread interest. However, even in the Russian domestic context, where anecdotal evidence of state activity online goes back almost a decade, no public systematic attempt has been made to dissect the population of Russian social media bots by their political orientation. We address this gap by developing a deep neural network classifier that separates pro-regime, anti-regime, and neutral Russian Twitter bots. Our method relies on supervised machine learning and a new large set of labeled accounts, rather than externally obtained account affiliations or orientation of elites. We also illustrate the use of our method by applying it to bots operating in Russian political Twitter from 2015 to 2017 and show that both pro- and anti-Kremlin bots had a substantial presence on Twitter.

    Date Posted: Apr 12, 2019

  • Journal Article

    Detecting Bots on Russian Political Twitter

    Big Data, 2017

    Automated and semiautomated Twitter accounts, known as bots, have recently gained significant public attention due to their potential interference in the political realm. In this study, we develop a methodology for detecting bots on Twitter using an ensemble of classifiers and apply it to study bot activity within political discussions in the Russian Twittersphere. We focus on the interval from February 2014 to December 2015, an especially consequential period in Russian politics. Among accounts actively tweeting about Russian politics, we find that on the majority of days, the proportion of tweets produced by bots exceeds 50%. We reveal bot characteristics that distinguish them from humans in this corpus, and find that the software platform used for tweeting is among the best predictors of bots. Finally, we find suggestive evidence that one prominent activity of bots on Russian political Twitter was the spread of news stories and the promotion of the media outlets that produce them.

    Date Posted: Dec 01, 2017

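
    The ensemble idea can be sketched as majority voting over simple per-account classifiers. The account fields and thresholds below are invented for illustration and are not the paper's features or fitted models.

```python
def make_ensemble(classifiers):
    """Combine per-account classifiers by majority vote, the simplest
    way to aggregate an ensemble's predictions."""
    def predict(account):
        votes = sum(1 for clf in classifiers if clf(account))
        return votes * 2 > len(classifiers)
    return predict

# toy member classifiers over an account dict (hypothetical fields)
high_volume = lambda a: a["tweets_per_day"] > 72
default_image = lambda a: a["default_profile_image"]
odd_client = lambda a: a["source"] not in {"Twitter for iPhone",
                                           "Twitter Web App"}

is_bot = make_ensemble([high_volume, default_image, odd_client])
```

    In practice each member would be a trained classifier rather than a hand-set rule, but the aggregation step, requiring agreement among independent detectors, is the same.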
  • Book

    Measuring Public Opinion with Social Media Data

    The Oxford Handbook of Polling and Survey Methods, 2018

    This chapter examines the use of social networking sites such as Twitter in measuring public opinion. It first considers the opportunities and challenges that are involved in conducting public opinion surveys using social media data. Three challenges are discussed: identifying political opinion, representativeness of social media users, and aggregating from individual responses to public opinion. The chapter outlines some of the strategies for overcoming these challenges and proceeds by highlighting some of the novel uses for social media that have fewer direct analogs in traditional survey work. Finally, it suggests new directions for a research agenda in using social media for public opinion work.

    Date Posted: Oct 01, 2017