In Progress Research

CSMaP’s primary output is academic research. While peer review helps ensure the rigor of academic work, it also delays research reaching the wider world. Given the speed at which the digital information environment is changing, we created this page to highlight the topic areas we’re currently researching.

If you’re interested in learning more about this ongoing research, please reach out to us at csmap@nyu.edu.

Generative AI

We’re currently exploring multiple lines of research related to new AI technologies. 

  • First, we’re analyzing how real people use generative AI and to what effect. We launched a new archive project to track how political operatives up and down the ballot use AI in campaigns, are running experiments to understand the effects of labeling images as AI-generated, and are studying the impact of using AI chatbots for fact-checking political news.

  • Second, we’re exploring how political biases are embedded in language models themselves — contributing to a relatively underexplored area in the important field of inquiry examining how biases impact model outputs. 

  • Third, we’re leveraging recent innovations in large language models to augment how we study politics. In a recent working paper, we used ChatGPT to successfully estimate political ideology. We’re now exploring how to use these tools to better measure new concepts, such as political sectarianism, at scale. 

Emerging Platforms

The social media landscape is no longer dominated by a few large, legacy platforms (e.g., Twitter, Facebook, Instagram). In the past few years, we’ve seen the rise of broadcast-style entertainment apps, private messaging apps, and other niche platforms. At the same time, access to social media data continues to shrink, making it harder for academics to understand the social media ecosystem.

  • We’re setting up new research infrastructure to explore these online environments, including projects collecting data to understand how politics is talked about on YouTube, WhatsApp, and right-wing platforms (Gab, Gettr, and Rumble).  

  • We are also one of the founding consortium members of the Accelerator infrastructure project, which builds shared infrastructure for data collection, analysis, and tool development to power policy-relevant research on today’s information environment.

Beyond the United States

Much of the research about social media focuses on the United States and other Western democracies, and on the platforms popular in those countries. This leaves us with little knowledge about the rest of the world, where citizens interact with social networks in many different ways. We have two major research projects studying this environment. 

  • First is a global deactivation study. In more than 20 countries simultaneously, we’re examining the causal effects of social media on a variety of factors, including political polarization, news knowledge, belief in misinformation, and social well-being. 

  • Second, we are running a similar project focused solely on WhatsApp in India and South Africa, which builds on a recent experiment of ours studying WhatsApp in Brazil.

Propaganda

Studying foreign influence campaigns has been a key part of our research agenda for more than a decade. We have several ongoing projects looking specifically at the role of propaganda online. 

  • First, we’re examining how propaganda from authoritarian regimes shows up in LLM training data, which in turn can shape these models’ outputs, making them more favorable to the regimes’ cause.

  • Second, we’re developing automated tools to detect and analyze “narrative diffusion” — where content produced by malign actors is reused by other online news sources. As the supply of digital information grows, amplifying information becomes more crucial than producing it. However, most methods of analysis aren’t able to capture this amplifying effect. While these tools are language agnostic and can be used to study a range of contexts, in an ongoing project we specifically focus on the impact of Russian state-owned media on U.S. media coverage of the Russo-Ukrainian war.

  • Third, plenty of research focuses on algorithmic ranking on U.S.-based social media platforms. We’re studying whether authoritarian regimes, particularly China, use algorithmic ranking to their own advantage, for example by upranking state sources.

Interventions

In addition to understanding how information spreads online, and the effect that information has on political attitudes and beliefs, we also explore which interventions might mitigate the harms associated with digital technologies. In one new project, which replicates a recent study of ours about Google search, we’re researching how using ChatGPT to evaluate online news affects belief in the veracity of news — and misinformation. In another, we’re examining whether exposing people to news from the other political party affects their feelings toward that party and its policies.

Extremism

One major takeaway from the past decade of social media research is that while most people do not consume misinformation, a small minority consume a lot of it, which can contribute to increasing political extremism. In one recent study, posted as a working paper, we find that YouTube’s algorithm does not lead the vast majority of people down extremist rabbit holes. In another, we are seeking to understand the factors associated with support for white nationalist ideology in the United States, including the role of online behavior.