Despite Warning Labels, Trump’s Election Misinformation Tweets Spread Widely Across Social Media Platforms, New Study Finds

August 24, 2021  ·   News

The paper’s findings reveal how misinformation spreads across networks and point to the need for improved content-moderation techniques.

Donald Trump speaking at a podium.

Credit: Wikimedia

Before and after the 2020 presidential election, Twitter flagged hundreds of Donald Trump’s tweets as election misinformation, either attaching a warning label or blocking engagement with the tweet entirely.

Although blocking engagement effectively limited the spread of those tweets, messages with warning labels spread further and longer on Twitter than those without labels, according to a new study. Moreover, the blocked Twitter messages were posted more often and received more visibility on Facebook, Instagram, and Reddit than other messages.

The paper, authored by researchers at New York University’s Center for Social Media and Politics (CSMaP), appears in the Harvard Kennedy School Misinformation Review.

“These data cannot tell us whether Twitter’s warning labels worked or not,” says Zeve Sanderson, co-author of the paper and executive director of CSMaP. “It’s possible Twitter intervened on posts that were more likely to spread, or it’s possible Twitter’s interventions caused a backlash and increased their spread.” 

“Nonetheless, the findings underscore how intervening on one platform has limited impact when content can easily spread on others,” added paper co-author and research scientist Megan A. Brown. “To more effectively counteract misinformation on social media, it’s important for both technologists and public officials to consider broader content moderation policies that can work across social platforms rather than singular platforms.”

For the study, the team identified 1,149 political tweets from then-President Donald Trump posted from November 1, 2020 through January 8, 2021. Of these, 303 received a “soft intervention” from Twitter (they were labeled as disputed and potentially misleading, but the platform did not remove or block them from spreading), 16 received a “hard intervention” (they were labeled with a warning message and blocked from engagement), and 830 received no intervention. The authors also identified these same messages on Facebook, Instagram, and Reddit and collected data from those platforms, where the posts may not have been limited due to differing content moderation policies.

The research yielded two findings:

  1. While hard interventions limited the further spread of those messages on Twitter, tweets that received a soft intervention actually spread further than messages that received no intervention at all.

  2. Messages that received hard interventions on Twitter spread longer and further on Facebook, Instagram, and Reddit than messages that received either soft or no interventions on Twitter.

In addition, the authors note, this study was possible only because data from these platforms have been made publicly available, either through the platforms themselves, third-party tools, or other researchers.

“Research on social media’s impact on society has made tremendous strides in the last decade. But our work has often been hampered by a lack of platform transparency and access to the necessary data,” observes NYU Professor Joshua A. Tucker, a co-author of the study and co-director of CSMaP. “Increasing data access is critical to measuring the ecosystem-level impact of content moderation and producing rigorous research that can inform evidence-based public and platform policy.”

The paper’s other authors are CSMaP’s other co-directors: Jonathan Nagler, a professor in NYU’s Department of Politics, and Richard Bonneau, a professor in NYU’s Department of Biology and Courant Institute of Mathematical Sciences.

Cross-posted at NYU.