Data Science Methodology
Our experts develop new methodologies to better understand how social media affects politics and democracy. By developing and deploying code, CSMaP researchers create new ways to quantify social media interactions and their effects.
Academic Research
-
Working Paper
Artificial Intelligence, Politics, and Political Science
Working Paper, 2026
This forthcoming edited volume (Cambridge University Press) examines the transformative impact of artificial intelligence on democratic institutions, political behavior, governance, and the discipline of political science itself. The volume represents the report of the American Political Science Association’s Presidential Task Force on AI, Politics, and Political Science, co-chaired by Joshua Tucker and Nathaniel Persily.
Across twelve chapters produced by close to 60 scholars, the report evaluates how generative AI and machine learning systems are reshaping public opinion formation, political communication, labor markets, electoral processes, state capacity, and regulatory frameworks. The authors analyze both the opportunities and risks posed by AI technologies, including concerns surrounding information integrity, ideological personalization, surveillance, democratic accountability, and concentrated technological power. Themes that cut across multiple chapters include: the unprecedented power of a small number of AI corporations; the opacity and non-replicability of model outputs; bias in AI systems; and the absence of agreed-upon benchmarks for evaluation.
The volume also addresses methodological and ethical implications for political science research, emphasizing transparency, reproducibility, and the responsible integration of AI tools into scholarly inquiry. Ultimately, the volume argues that AI will not only alter political institutions and citizen-state relations, but also may fundamentally reshape how political knowledge is produced and interpreted. It calls for sustained interdisciplinary collaboration and evidence-based governance to ensure that AI development supports democratic resilience rather than undermining it.
-
Working Paper
Synthetic personas distort the structure of human belief systems
Working Paper, 2026
Large language models (LLMs) are increasingly used as synthetic survey respondents, yet it is unclear whether their belief-system structure matches that of real publics. We compare 28 LLMs to the 2024 General Social Survey (GSS) using 52 attitude items and demographic persona traits. We estimate polychoric correlation matrices and propagate uncertainty in the GSS via bootstrap resampling with multiple imputation. Constraint is measured by the variance share explained by the first principal component and by effective dependence, a determinant-based measure of global linear dependence. Across models, LLM personas exhibit substantially higher constraint than humans; conditioning on persona traits reduces constraint far more for LLMs, indicating greater demographic mediation. Projection onto a shared GSS basis further shows overemphasis of the leading dimension and missing secondary structure. These results caution against treating LLM personas as a reliable foundation for synthetic survey data generation.
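The two constraint measures named in the abstract can be sketched from a correlation matrix alone. The sketch below is illustrative, not the paper's implementation: the effective-dependence normalization used here (1 − det(R)^(1/p)) is one common determinant-based form and is an assumption on our part, as are the toy equicorrelation matrices standing in for "human-like" and "persona-like" item batteries.

```python
import numpy as np

def constraint_measures(R):
    """Two global constraint measures for a p x p correlation matrix R.

    Returns (pc1_share, effective_dependence):
    - pc1_share: fraction of total variance explained by the first
      principal component (larger => tighter belief-system constraint).
    - effective_dependence: a determinant-based measure of global linear
      dependence, here 1 - det(R)**(1/p): 0 when items are independent
      (R = I), approaching 1 as items become collinear. The exact
      normalization in the paper may differ.
    """
    R = np.asarray(R, dtype=float)
    p = R.shape[0]
    eigvals = np.linalg.eigvalsh(R)          # eigenvalues, ascending order
    pc1_share = eigvals[-1] / eigvals.sum()  # leading-eigenvalue share
    eff_dep = 1.0 - np.linalg.det(R) ** (1.0 / p)
    return pc1_share, eff_dep

def equicorrelation(p, rho):
    """Toy p-item battery with a common pairwise correlation rho."""
    return (1 - rho) * np.eye(p) + rho * np.ones((p, p))

# Illustrative numbers only: a weakly correlated battery vs. a
# highly constrained one. Both measures rise with constraint.
print(constraint_measures(equicorrelation(5, 0.2)))
print(constraint_measures(equicorrelation(5, 0.7)))
```

For the equicorrelation case the leading eigenvalue is 1 + (p − 1)ρ, so the PC1 share is easy to check by hand; on real polychoric matrices the same two lines of linear algebra apply unchanged.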
Reports & Analysis
-
Analysis
Are Influence Campaigns Trolling Your Social Media Feeds?
Now, there are ways to find out. New data shows that machine learning can identify content created by online political influence operations.
October 13, 2020
News & Commentary
-
News
How Will AI Reshape Politics? New Volume Co-Edited by CSMaP's Joshua Tucker Explores the Stakes
The Presidential Task Force on AI, Politics, and Political Science of the American Political Science Association presents a timely, wide-ranging analysis of what AI may mean for politics and for the field of political science.
May 11, 2026
-
Commentary
Was there censorship on TikTok after the U.S. takeover?
A TikTok outage more likely explains recent anomalies – there’s no evidence of larger platform changes so far.
February 4, 2026