Twitter Flagged Donald Trump’s Tweets with Election Misinformation: They Continued to Spread Both On and Off the Platform

Despite warning labels, Trump’s election misinformation tweets continued to spread across social media, underscoring the importance of considering content moderation at the ecosystem level.

Abstract

We analyze the spread of Donald Trump’s tweets that were flagged by Twitter using two intervention strategies: attaching a warning label and blocking engagement with the tweet entirely. We find that while blocking engagement on certain tweets limited their diffusion, the labeled messages we examined spread further on Twitter than those without labels. Additionally, the messages that had been blocked on Twitter remained popular on Facebook, Instagram, and Reddit, where they were posted more often and received more visibility than messages that had either been labeled by Twitter or received no intervention at all. Taken together, our results emphasize the importance of considering content moderation at the ecosystem level.

Background

Much like four years ago, evidence suggests that misinformation related to the presidential campaign circulated widely both online and offline throughout the 2020 U.S. elections. One of the most notable developments was the public commitment by social media platforms, including Facebook, Instagram, and Twitter, to address misinformation in the lead-up to and aftermath of November 3, using measures that ranged from adding context labels to posts, to halting the sharing of posts, to removing posts that contained or linked to election-related misinformation altogether. There is mixed evidence regarding the effectiveness of these types of interventions, and we have limited empirical understanding of the relationship between platform intervention strategies and actual user behavior.

Study

Here, we focus on the impact of Twitter’s interventions on the diffusion of election-related messaging from former President Donald Trump. To do so, we analyze the spread of Trump’s tweets containing election-related misinformation posted between November 1, 2020 and January 8, 2021. These tweets were flagged by Twitter using one of two intervention strategies: attaching a warning label or blocking engagement with the tweet entirely. We then collect data from Twitter and compare the spread of messages that were not flagged with the spread of those that received a warning label or had engagement blocked. To understand the impact of one platform’s intervention on the messages’ broader spread, we also identify these same messages on Facebook, Instagram, and Reddit and collect data from those platforms.
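
To make the group comparison concrete, the following is a minimal sketch of how per-message spread could be compared across intervention types. The file name and column names (intervention, retweets, crossposts) are illustrative placeholders and do not reflect the study’s actual data or analysis pipeline.

import pandas as pd

# Hypothetical sketch: each row is one Trump message; "intervention" is one of
# "none", "label", or "blocked". File and column names are placeholders.
df = pd.read_csv("trump_messages.csv")

# Median on-platform spread (retweets) for each intervention group.
on_platform = df.groupby("intervention")["retweets"].median()

# Median cross-platform posts (e.g., Facebook, Instagram, and Reddit shares combined)
# for each intervention group.
off_platform = df.groupby("intervention")["crossposts"].median()

print(pd.DataFrame({"median_retweets": on_platform,
                    "median_crossposts": off_platform}))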

Results

We find that while blocking engagement on certain tweets limited their diffusion, the labeled messages we examined spread further on Twitter than those without labels. Additionally, the messages that had been blocked on Twitter remained popular on Facebook, Instagram, and Reddit, where they were posted more often and received more visibility than messages that had either been labeled by Twitter or received no intervention at all. Our findings underscore the networked nature of misinformation: posts or messages banned on one platform may continue to spread on other mainstream platforms in the form of links, quotes, or screenshots. This study emphasizes the importance of researching content moderation at the ecosystem level, adding new evidence to a growing public and platform policy debate around implementing effective interventions to counteract misinformation.