Political Polarization
As the U.S. becomes increasingly politically polarized, many blame social media platforms for incentivizing outrage and escalating division. Our experts explore ways to quantify polarization and examine its impact on society.
Academic Research
-
Working Paper
Polarization by Default: Auditing Recommendation Bias in LLM-Based Content Curation
Working Paper, 2026
Large Language Models (LLMs) are increasingly deployed to curate and rank human-created content, yet the nature and structure of their biases in these tasks remain poorly understood: which biases are robust across providers and platforms, and which can be mitigated through prompt design? We present a controlled simulation study mapping content selection biases across three major LLM providers (OpenAI, Anthropic, Google) on real social media datasets from Twitter/X, Bluesky, and Reddit, using six prompting strategies (general, popular, engaging, informative, controversial, neutral). Through 540,000 simulated top-10 selections from pools of 100 posts across 54 experimental conditions, we find that biases differ substantially in how structural and how prompt-sensitive they are. Polarization is amplified across all configurations, toxicity handling shows a strong inversion between engagement- and information-focused prompts, and sentiment biases are predominantly negative. Provider comparisons reveal distinct trade-offs: GPT-4o Mini shows the most consistent behavior across prompts; Claude and Gemini exhibit high adaptivity in toxicity handling; Gemini shows the strongest negative sentiment preference. On Twitter/X, where author demographics can be inferred from profile bios, political leaning bias is the clearest demographic signal: left-leaning authors are systematically over-represented despite right-leaning authors forming the pool plurality in the dataset, and this pattern largely persists across prompts.
-
Journal Article
Misinformation Beyond Traditional Feeds: Evidence from a WhatsApp Deactivation Experiment in Brazil
The Journal of Politics, 2025
In most advanced democracies, concerns about the spread of misinformation are typically associated with feed-based social media platforms like Twitter and Facebook. These platforms also account for the vast majority of research on the topic. However, in much of the world, particularly in Global South countries, misinformation often reaches citizens through social media messaging apps, especially WhatsApp. To fill the resulting gap in the literature, we conducted a multimedia deactivation experiment to test the impact of reducing exposure to potential sources of misinformation on WhatsApp during the weeks leading up to the 2022 presidential election in Brazil. We find that this intervention significantly reduced participants' recall of false rumors circulating widely during the election. However, consistent with theories of mass media minimal effects, a short-term change in the information environment did not lead to significant changes in belief accuracy, political polarization, or well-being.
Reports & Analysis
-
Analysis
Reducing Exposure To Misinformation: Evidence from WhatsApp in Brazil
Deactivating multimedia on WhatsApp in Brazil consistently reduced exposure to online misinformation during the pre-election weeks in 2022, but it neither changed whether false news was believed nor reduced polarization.
August 16, 2024
-
Analysis
Which Republicans Are Most Likely to Think the Election Was Stolen?
Those who dislike Democrats and don’t mind white nationalists. That includes plenty of Republicans with college educations.
January 19, 2021
News & Commentary
-
Commentary
Gen Z Is More Progressive Than Millennials, Except in One Crucial Way
The most progressive generation ever? It’s complicated.
January 9, 2026
-
Commentary
The Joe Rogan of the left, right, and center is just … Joe Rogan
A new analysis of podcasts shows that Rogan isn't as MAGA as you think.
December 18, 2025