Misinformation & Information
In the digital age, information and misinformation spread rapidly on social media. CSMaP experts study how we consume and share news online and how misinformation affects our democracy.
Research
-
Working Paper
Echo Chambers, Rabbit Holes, and Algorithmic Bias: How YouTube Recommends Content to Real Users
Working Paper, May 2022
To what extent does the YouTube recommendation algorithm push users into echo chambers, ideologically biased content, or rabbit holes? Despite growing popular concern, recent work suggests that the recommendation algorithm is not pushing users into such content. However, existing research relies heavily on anonymous data collection that does not account for the personalized nature of the recommendation algorithm. We asked a sample of real users to install a browser extension that downloaded the list of videos they were recommended. We instructed these users to start on an assigned video and then click through 20 sets of recommendations, capturing what they were being shown in real time as they used the platform logged into their real accounts. Using a novel method to estimate the ideology of a YouTube video, we demonstrate that the YouTube recommendation algorithm does, in fact, push real users into mild ideological echo chambers where, by the end of the data collection task, liberals and conservatives received different distributions of recommendations from each other, though this difference is small. While we find evidence that this difference increases the longer the user follows the recommendation algorithm, we do not find evidence that many users go down "rabbit holes" that lead them to ideologically extreme content. Finally, we find that YouTube pushes all users, regardless of ideology, toward moderately conservative content and an increasingly narrow range of ideological content the longer they follow YouTube's recommendations.
-
Journal Article
News Credibility Labels Have Limited Average Effects on News Diet Quality and Fail to Reduce Misperceptions
Science Advances, 2022
As the primary arena for viral misinformation shifts toward transnational threats, the search continues for scalable countermeasures compatible with principles of transparency and free expression. We conducted a randomized field experiment evaluating the impact of source credibility labels embedded in users’ social feeds and search results pages. By combining representative surveys (n = 3337) and digital trace data (n = 968) from a subset of respondents, we provide a rare ecologically valid test of such an intervention on both attitudes and behavior. On average across the sample, we are unable to detect changes in real-world consumption of news from low-quality sources after 3 weeks. We can also rule out small effects on perceived accuracy of popular misinformation spread about the Black Lives Matter movement and coronavirus disease 2019. However, we present suggestive evidence of a substantively meaningful increase in news diet quality among the heaviest consumers of misinformation. We discuss the implications of our findings for scholars and practitioners.
News & Views
-
News
Does Presenting Credibility Labels of Journalistic Sources Affect News Consumption? New Study Finds Limited Effects
On average, source credibility labels don't change whether someone reads low-quality news sources, but they do appear to improve the news diet quality of the heaviest consumers of misinformation.
May 6, 2022
-
Commentary
The Social Media Data We Need to Answer Key Research Questions
Ahead of a Senate Judiciary Subcommittee hearing on platform transparency, we submitted a letter outlining the type of research questions we want to answer — and the social media data we need to answer them.
May 4, 2022