Academic Research
-
Working Paper
Polarization by Default: Auditing Recommendation Bias in LLM-Based Content Curation
Working Paper, 2026
Large Language Models (LLMs) are increasingly deployed to curate and rank human-created content, yet the nature and structure of their biases in these tasks remain poorly understood: it is unclear which biases are robust across providers and platforms and which can be mitigated through prompt design. We present a controlled simulation study mapping content selection biases across three major LLM providers (OpenAI, Anthropic, Google) on real social media datasets from Twitter/X, Bluesky, and Reddit, using six prompting strategies (general, popular, engaging, informative, controversial, neutral). Through 540,000 simulated top-10 selections from pools of 100 posts across 54 experimental conditions, we find that biases differ substantially in how structural and how prompt-sensitive they are. Polarization is amplified across all configurations, toxicity handling shows a strong inversion between engagement- and information-focused prompts, and sentiment biases are predominantly negative. Provider comparisons reveal distinct trade-offs: GPT-4o Mini shows the most consistent behavior across prompts; Claude and Gemini exhibit high adaptivity in toxicity handling; Gemini shows the strongest negative sentiment preference. On Twitter/X, where author demographics can be inferred from profile bios, political leaning bias is the clearest demographic signal: left-leaning authors are systematically over-represented despite right-leaning authors forming the pool plurality in the dataset, and this pattern largely persists across prompts.
-
Working Paper
The Partisan Effects of Social Media Bans
Working Paper, March 2026
Reports & Analysis
-
Analysis
Latinos Who Use Spanish-Language Social Media Get More Misinformation
That could affect their votes and their safety from COVID-19.
November 8, 2022
-
Analysis
Gender-Based Online Violence Spikes After Prominent Media Attacks
Our research finds that after a prominent male media personality targets a female journalist, the prevalence of hateful speech aimed at that journalist increases in the immediate aftermath and often takes days to subside.
January 26, 2022
News & Commentary
-
Policy
Comments on Ofcom’s Call for Evidence on Researcher Access
We responded to Ofcom’s public request for evidence on researcher access to online service data for safety research, highlighting barriers researchers face when accessing social media data, the challenges of limited information sharing, potential ways to improve data access, and examples of robust data-sharing practices.
July 26, 2025
-
News
How Do State Lawmakers Decide What to Prioritize?
New research reveals state legislators respond to both local constituents and national politicians when setting their policy agendas, with stronger influence from politically engaged partisans within their states.
April 21, 2025