Jonathan Nagler
Related Research & News
-
Commentary
White House OSTP Comments on AI
In response to the Biden-Harris Administration's public request for information on mitigating the risks of AI, we submitted comments highlighting the importance of transparent standards for identifying and labeling AI-generated content online.
July 7, 2023
-
Working Paper
WhatsApp Increases Exposure to False Rumors but has Limited Effects on Beliefs and Polarization: Evidence from a Multimedia-Constrained Deactivation
Working Paper, May 2023
-
Commentary
Feedback on EU Article 40
In response to the European Commission's Digital Services Act, we submitted comments highlighting the importance of data access for independent research and suggested standards for data access mechanisms.
May 23, 2023
-
Working Paper
Large Language Models Can Be Used to Scale the Ideologies of Politicians in a Zero-Shot Learning Setting
Working Paper, March 2023
-
Journal Article
Exposure to the Russian Internet Research Agency Foreign Influence Campaign on Twitter in the 2016 US Election and Its Relationship to Attitudes and Voting Behavior
Nature Communications, 2023
-
Commentary
Musk’s Twitter Shake-Up Could Deliver a Critical Blow to Social Media Research
We still don’t know the extent of what Musk has actually changed within Twitter. But without mandated data access for researchers, we risk never knowing the impact of those changes on society.
November 9, 2022
-
Analysis
Latinos Who Use Spanish-Language Social Media Get More Misinformation
That could affect their votes, as well as their safety from COVID-19.
November 8, 2022
-
Working Paper
Social Media, Information, and Politics: Insights on Latinos in the U.S.
Working Paper, November 2022
-
Journal Article
Dictionary-Assisted Supervised Contrastive Learning
Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2022
-
Journal Article
Using Social Media Data to Reveal Patterns of Policy Engagement in State Legislatures
State Politics & Policy Quarterly, 2022
-
Analysis
Echo Chambers, Rabbit Holes, and Ideological Bias: How YouTube Recommends Content to Real Users
We find that YouTube’s recommendation algorithm does not lead the vast majority of users down extremist rabbit holes, although it does push users into increasingly narrow ideological ranges of content in what we might call evidence of a (very) mild ideological echo chamber.
October 13, 2022
-
Journal Article
Most Users Do Not Follow Political Elites on Twitter; Those Who Do Show Overwhelming Preferences for Ideological Congruity
Science Advances, 2022
-
Journal Article
Election Fraud, YouTube, and Public Perception of the Legitimacy of President Biden
Journal of Online Trust and Safety, 2022
-
Journal Article
What We Learned About The Gateway Pundit from its Own Web Traffic Data
Workshop Proceedings of the 16th International AAAI Conference on Web and Social Media, 2022
-
Working Paper
Echo Chambers, Rabbit Holes, and Algorithmic Bias: How YouTube Recommends Content to Real Users
Working Paper, May 2022
-
Journal Article
News Credibility Labels Have Limited Average Effects on News Diet Quality and Fail to Reduce Misperceptions
Science Advances, 2022
-
Commentary
The Social Media Data We Need to Answer Key Research Questions
Ahead of a Senate Judiciary Subcommittee hearing on platform transparency, we submitted a letter outlining the types of research questions we want to answer, and the social media data we need to answer them.
May 4, 2022
-
Working Paper
Estimating the Ideology of Political YouTube Videos
Working Paper, May 2022
-
Working Paper
To Moderate, Or Not to Moderate: Strategic Domain Sharing by Congressional Campaigns
Working Paper, April 2022
-
Journal Article
What’s Not to Like? Facebook Page Likes Reveal Limited Polarization in Lifestyle Preferences
Political Communication, 2021