Media Consumption

Social media has altered the way we consume and interact with different forms of media. CSMaP experts analyze the real-world implications of our online consumption and how it impacts the political landscape.

Academic Research

  • Working Paper

    Polarization by Default: Auditing Recommendation Bias in LLM-Based Content Curation

    Working Paper, 2026


    Large Language Models (LLMs) are increasingly deployed to curate and rank human-created content, yet the nature and structure of their biases in these tasks remain poorly understood: it is unclear which biases are robust across providers and platforms, and which can be mitigated through prompt design. We present a controlled simulation study mapping content selection biases across three major LLM providers (OpenAI, Anthropic, Google) on real social media datasets from Twitter/X, Bluesky, and Reddit, using six prompting strategies (general, popular, engaging, informative, controversial, neutral). Through 540,000 simulated top-10 selections from pools of 100 posts across 54 experimental conditions, we find that biases differ substantially in how structural and how prompt-sensitive they are. Polarization is amplified across all configurations, toxicity handling shows a strong inversion between engagement- and information-focused prompts, and sentiment biases are predominantly negative. Provider comparisons reveal distinct trade-offs: GPT-4o Mini shows the most consistent behavior across prompts; Claude and Gemini exhibit high adaptivity in toxicity handling; Gemini shows the strongest negative sentiment preference. On Twitter/X, where author demographics can be inferred from profile bios, political leaning bias is the clearest demographic signal: left-leaning authors are systematically over-represented despite right-leaning authors forming the pool plurality in the dataset, and this pattern largely persists across prompts.
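    The experimental grid described above is fully determined by the abstract's numbers: 3 providers × 3 platforms × 6 prompting strategies gives 54 conditions, and 540,000 total selections implies 10,000 simulated top-10 selections per condition. A minimal sketch of that grid (the list names are placeholders, not the authors' actual identifiers):

    ```python
    from itertools import product

    # Factors of the simulation study as described in the abstract.
    providers = ["OpenAI", "Anthropic", "Google"]
    platforms = ["Twitter/X", "Bluesky", "Reddit"]
    prompts = ["general", "popular", "engaging",
               "informative", "controversial", "neutral"]

    # Full factorial design: every provider x platform x prompt combination.
    conditions = list(product(providers, platforms, prompts))
    print(len(conditions))  # 54 experimental conditions

    # 540,000 simulated top-10 selections spread evenly over the grid
    # works out to 10,000 runs per condition, each picking 10 of 100 posts.
    runs_per_condition = 540_000 // len(conditions)
    print(runs_per_condition)  # 10000
    ```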

  • Working Paper

    AI summaries in social media improve dialogue but reduce engagement

    • Michael Heseltine, 
    • Christopher A. Bail, 
    • Petter Törnberg, 
    • Michelle Schimmel, 
    • Christopher Barrie

    Working Paper, 2026


    Generative artificial intelligence agents are becoming increasingly active participants in conversations on social media platforms, yet little is known about how they shape public discussion of social problems. We present two preregistered online experiments testing AI-generated summaries in simulated, interactive social media environments. AI summaries increased the quality of user comments, without systematically increasing toxicity or negative affect. At the same time, AI exposure reduced engagement with conversation threads. AI summaries also increased the semantic similarity between user comments and the AI-generated summaries, suggesting that these systems function as informational anchors that shape discussion. Together, the findings reveal a tradeoff: AI-generated summaries can improve conversation quality while narrowing conversational engagement and channeling how users articulate political arguments. These results speak to growing concerns about how embedded AI systems fundamentally alter platform dynamics and shape public discourse.
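    The finding that AI summaries "increased the semantic similarity between user comments and the AI-generated summaries" rests on a text-similarity measure. The paper's actual method is not specified in the abstract; as a minimal illustration only, here is a bag-of-words cosine similarity (real studies would typically use learned embeddings instead), with hypothetical example texts:

    ```python
    import math
    from collections import Counter

    def cosine_similarity(a: str, b: str) -> float:
        """Bag-of-words cosine similarity between two texts (0.0 to 1.0)."""
        va, vb = Counter(a.lower().split()), Counter(b.lower().split())
        dot = sum(va[w] * vb[w] for w in va)
        norm = (math.sqrt(sum(v * v for v in va.values()))
                * math.sqrt(sum(v * v for v in vb.values())))
        return dot / norm if norm else 0.0

    # Hypothetical texts, purely for illustration.
    summary = "The thread discusses rising housing costs and zoning reform"
    comment = "Zoning reform could help with rising housing costs in cities"
    unrelated = "I love this song what a great melody"

    # A comment echoing the summary scores higher than an unrelated one,
    # which is the kind of "informational anchoring" signal the study reports.
    print(cosine_similarity(summary, comment))
    print(cosine_similarity(summary, unrelated))
    ```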

