Research
CSMaP is a leading academic research institute studying the ever-shifting online environment at scale. We publish peer-reviewed research in top academic journals, produce rigorous reports and analyses on policy-relevant topics, and develop open-source tools and methods to support the broader scholarly community.
Academic Research
-
Working Paper
Artificial Intelligence, Politics, and Political Science
Working Paper, 2026
This forthcoming edited volume (Cambridge University Press) examines the transformative impact of artificial intelligence on democratic institutions, political behavior, governance, and the discipline of political science itself. The volume represents the report of the American Political Science Association’s Presidential Task Force on AI, Politics, and Political Science, co-chaired by Joshua Tucker and Nathaniel Persily.
Across twelve chapters produced by close to 60 scholars, the report evaluates how generative AI and machine learning systems are reshaping public opinion formation, political communication, labor markets, electoral processes, state capacity, and regulatory frameworks. The authors analyze both the opportunities and risks posed by AI technologies, including concerns surrounding information integrity, ideological personalization, surveillance, democratic accountability, and concentrated technological power. Themes that cut across multiple chapters include: the unprecedented power of a small number of AI corporations; the opacity and non-replicability of model outputs; bias in AI systems; and the absence of agreed-upon benchmarks for evaluation.
The volume also addresses methodological and ethical implications for political science research, emphasizing transparency, reproducibility, and the responsible integration of AI tools into scholarly inquiry. Ultimately, the volume argues that AI may not only alter political institutions and citizen-state relations, but also fundamentally reshape how political knowledge is produced and interpreted. It calls for sustained interdisciplinary collaboration and evidence-based governance to ensure that AI development supports democratic resilience rather than undermining it.
-
Working Paper
Polarization by Default: Auditing Recommendation Bias in LLM-Based Content Curation
Working Paper, 2026
Large Language Models (LLMs) are increasingly deployed to curate and rank human-created content, yet the nature and structure of their biases in these tasks remain poorly understood: it is unclear which biases are robust across providers and platforms, and which can be mitigated through prompt design. We present a controlled simulation study mapping content selection biases across three major LLM providers (OpenAI, Anthropic, Google) on real social media datasets from Twitter/X, Bluesky, and Reddit, using six prompting strategies (general, popular, engaging, informative, controversial, neutral). Through 540,000 simulated top-10 selections from pools of 100 posts across 54 experimental conditions, we find that biases differ substantially in how structural and how prompt-sensitive they are. Polarization is amplified across all configurations, toxicity handling shows a strong inversion between engagement- and information-focused prompts, and sentiment biases are predominantly negative. Provider comparisons reveal distinct trade-offs: GPT-4o Mini shows the most consistent behavior across prompts; Claude and Gemini exhibit high adaptivity in toxicity handling; Gemini shows the strongest negative sentiment preference. On Twitter/X, where author demographics can be inferred from profile bios, political leaning bias is the clearest demographic signal: left-leaning authors are systematically over-represented despite right-leaning authors forming the pool plurality in the dataset, and this pattern largely persists across prompts.
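The audit design described above (3 providers × 3 platforms × 6 prompting strategies = 54 conditions, each trial asking a model to pick a top 10 from a pool of 100 posts) can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual harness: the function names, prompt wording, and the `call_llm` stand-in are all assumptions, and a real run would plug in each provider's API client.

```python
from itertools import product

PROVIDERS = ["openai", "anthropic", "google"]    # provider labels (illustrative)
PLATFORMS = ["twitter_x", "bluesky", "reddit"]
STRATEGIES = ["general", "popular", "engaging",
              "informative", "controversial", "neutral"]
POOL_SIZE = 100   # posts per candidate pool, as in the study
TOP_K = 10        # posts selected per trial

def build_prompt(strategy: str, posts: list[str]) -> str:
    """Assemble a curation prompt for one strategy (wording is illustrative)."""
    header = f"Select the {TOP_K} most '{strategy}' posts from the numbered list:\n"
    return header + "\n".join(f"{i}. {text}" for i, text in enumerate(posts))

def run_trial(call_llm, provider: str, strategy: str, posts: list[str]) -> list[int]:
    """One simulated top-10 selection. `call_llm` stands in for a real API
    client and must return TOP_K distinct indices into the pool."""
    indices = call_llm(provider, build_prompt(strategy, posts))
    assert len(set(indices)) == TOP_K and all(0 <= i < len(posts) for i in indices)
    return indices

def selection_rate(trials: list[list[int]], group: set[int]) -> float:
    """Fraction of all selected posts belonging to a labeled subgroup (e.g.
    left-leaning authors), to compare against that group's share of the pool."""
    picks = [i for trial in trials for i in trial]
    return sum(i in group for i in picks) / len(picks)

# 3 providers x 3 platforms x 6 strategies = 54 experimental conditions
CONDITIONS = list(product(PROVIDERS, PLATFORMS, STRATEGIES))
```

Under this framing, a bias like the one reported for political leaning shows up as `selection_rate` for a subgroup exceeding that subgroup's share of the 100-post pool, and prompt sensitivity shows up as that gap varying across the six strategies.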
Reports & Analysis
-
Report
Research Coordination Network: Democracy in the Networked Era
The Digital Information Environment & Global Elections
September 23, 2025
-
Analysis
Who Has a Policy that Would Benefit You? More Voters Say Trump.
National survey data from the 2016, 2020, and 2024 elections shed light on how candidates' campaign strategies affect voters' recall of policy positions.
November 2, 2024
Data Collections & Tools
As part of our project to construct comprehensive data sets and to empirically test hypotheses related to social media and politics, we have developed a suite of open-source tools and modeling processes.