Moderating with the Mob: Evaluating the Efficacy of Real-Time Crowdsourced Fact-Checking
Social media companies have suggested using ordinary users to assess the veracity of news articles and combat misinformation. But ordinary users cannot identify false and misleading news in real time as effectively as professional fact-checkers.
Citation
Godel, William, Zeve Sanderson, Kevin Aslett, Jonathan Nagler, Richard Bonneau, Nathaniel Persily, and Joshua A. Tucker. “Moderating with the Mob: Evaluating the Efficacy of Real-Time Crowdsourced Fact-Checking.” Journal of Online Trust and Safety 1, no. 1 (2021). https://doi.org/10.54501/jots.v1i1.15
Date Posted
Oct 28, 2021
Abstract
Reducing the spread of false news remains a challenge for social media platforms, as the current strategy of using third-party fact-checkers lacks the capacity to address both the scale and speed of misinformation diffusion. Research on the “wisdom of the crowds” suggests one possible solution: aggregating the evaluations of ordinary users to assess the veracity of information. In this study, we investigate the effectiveness of a scalable model for real-time crowdsourced fact-checking. We select 135 popular news stories and have them evaluated by both ordinary individuals and professional fact-checkers within 72 hours of publication, producing 12,883 individual evaluations. Although we find that machine learning-based models using the crowd perform better at identifying false news than simple aggregation rules, our results suggest that neither approach is able to perform at the level of professional fact-checkers. Additionally, both methods perform best when using evaluations only from survey respondents with high political knowledge, suggesting reason for caution for crowdsourced models that rely on a representative sample of the population. Overall, our analyses reveal that while crowd-based systems provide some information on news quality, they are nonetheless limited—and have significant variation—in their ability to identify false news.
Background
Reducing the spread of false news remains a challenge for social media platforms, as the current strategy of using third-party fact-checkers lacks the capacity to address both the scale and speed of misinformation online. The volume of news, both true and false, is so great that simply classifying false or misleading articles quickly poses an enormous challenge. Twitter and Facebook have suggested crowdsourcing fact-checking: asking groups of ordinary users to add information that corroborates or corrects a tweet or post. The real-time effectiveness of this approach is unknown, and this study investigates how effective crowdsourced fact-checking can be in real time.
Study
First, we had 135 articles evaluated by participants within 24 hours of their publication, too soon for third-party fact-checkers to have corrected the content online. We then compared how well crowds of these ordinary users identified misinformation against professional fact-checkers (PFCs), and against crowds combined with machine learning. The articles were drawn from distinct low-quality news sources with three different political leanings (left-leaning, right-leaning, and no clear lean).
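To make the baseline concrete, here is a minimal sketch of the kind of simple aggregation rule against which the machine learning-based models are compared: take the majority vote of a crowd's veracity ratings and score it against the professional fact-checkers' modal rating. The category names and values below are illustrative, not the paper's actual coding scheme.

```python
from collections import Counter

# Hypothetical crowd ratings for one article. Respondents classify the
# story as "true", "false/misleading", or "could not determine".
crowd_ratings = ["true", "false/misleading", "true", "true", "false/misleading"]

def majority_vote(ratings):
    """Simple aggregation rule: label the article with whichever
    category the most respondents chose."""
    label, _count = Counter(ratings).most_common(1)[0]
    return label

# The crowd verdict would then be scored against the professional
# fact-checkers' modal rating, treated here as ground truth.
pfc_label = "false/misleading"  # illustrative value, not real study data
print(majority_vote(crowd_ratings), majority_vote(crowd_ratings) == pfc_label)
```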
Results
Our results suggest that neither crowdsourcing nor crowds combined with machine learning performs at the level of professional fact-checkers. We also find that methods using the most representative crowds, where all respondents are chosen at random, are inferior to systems that limit crowds to respondents with greater political knowledge. While crowd-based systems provide some information on news quality, they are nonetheless limited, with significant variation, in their ability to identify false news. A crowdsourced approach is therefore unlikely to replace a journalistic fact-checking system, but crowdsourcing could still serve as one stage in a larger fact-checking pipeline.
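As a rough illustration of how a machine-learning variant of such a pipeline might be wired up, the sketch below filters crowd evaluations by a political-knowledge cutoff, summarizes them into an article-level feature, and fits a classifier against fact-checker labels. The cutoff, the feature, the synthetic data, and the choice of logistic regression are all assumptions made for illustration, not the paper's actual model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in data: 135 articles, 20 crowd respondents each.
# knowledge[i, j] is respondent j's political-knowledge score on article i;
# rated_false[i, j] is 1 if that respondent called the story false/misleading.
rng = np.random.default_rng(0)
n_articles, n_respondents = 135, 20
knowledge = rng.uniform(0, 1, size=(n_articles, n_respondents))
rated_false = rng.integers(0, 2, size=(n_articles, n_respondents))

# Keep only evaluations from high-knowledge respondents. The 0.6 cutoff is
# purely illustrative, not the paper's actual threshold.
HIGH_KNOWLEDGE = 0.6
mask = knowledge >= HIGH_KNOWLEDGE

# Article-level feature: share of high-knowledge respondents rating it false.
share_false = (rated_false * mask).sum(axis=1) / np.maximum(mask.sum(axis=1), 1)
X = share_false.reshape(-1, 1)

# Stand-in for ground-truth labels derived from professional fact-checkers.
y = rng.integers(0, 2, size=n_articles)

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba(X[:3]))  # predicted probability each article is false
```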