New Study: Can Ordinary Users Effectively Fact Check Fake News in Real Time?

October 28, 2021  ·   News

Social media companies have suggested using ordinary users to assess the veracity of news articles and combat misinformation, but a new paper finds this is likely not a viable solution.

Photograph of a laptop whose screen reads “Fake News”.

Credit: Pixabay

Misinformation spreads rapidly on social media, often before professional fact checkers and media outlets have a chance to debunk false claims. Social media companies, such as Facebook and Twitter, have suggested using ordinary users to assess the veracity of news articles.

But ordinary users — and machine learning models based on information from those users — cannot effectively identify false and misleading news in real time, compared to professional fact checkers, according to a new paper from New York University’s Center for Social Media and Politics (CSMaP), published today in the inaugural issue of the Journal of Online Trust and Safety.

“The idea of using ordinary users as fact checkers is appealing. It takes decisions out of the hands of the big, powerful platforms; it is scalable; and it can be more inclusive and representative. Unfortunately, it wasn’t effective in our experiment,” said William Godel, a doctoral candidate in NYU’s Department of Politics and lead author of the study.

The researchers collected data to compare how well average Americans — and machine learning models based on data from those Americans — could assess the accuracy of news relative to professional fact checkers (PFCs).

Although machine learning models perform significantly better than simple aggregations of the evaluations of groups of ordinary users, the study found, neither approach matches the performance of PFCs. In addition, both approaches perform best when using only the evaluations of respondents with high political knowledge, suggesting caution about crowdsourcing models that rely on a representative sample of the population.

Methodology

In the study, our researchers collected a sample of three articles each day from suspect news sources using a pre-registered algorithm, which was designed to ensure an ideological balance and a balance of false and true content. The three articles were sent to a sample of 90 Americans and a panel of six PFCs, who evaluated the veracity of each article within 24 hours. (The non-PFCs also answered questions to measure their political knowledge.) In total, the dataset consisted of 12,883 respondent evaluations across 135 articles.

To measure the performance of multiple crowdsourced approaches to fact checking, we aggregated individual survey responses to the same article to construct multiple “crowds” ranging in size from one to 25 individuals. How the crowd evaluated that article — i.e., how we combined the evaluations in the crowd — was then determined by a variety of methods and compared to the PFC evaluations of that article.
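The crowd-construction step described above can be sketched in a few lines. This is a minimal illustration, assuming a modal-choice aggregation rule and simple categorical ratings; the function names and toy rating values are ours, not the paper's:

```python
import random
from collections import Counter

def modal_choice(evaluations):
    """Return the most common veracity rating in a crowd (ties broken arbitrarily)."""
    return Counter(evaluations).most_common(1)[0][0]

def crowd_accuracy(respondent_ratings, pfc_label, crowd_size, n_crowds=1000, seed=0):
    """Estimate how often a random crowd's modal rating matches the PFC label.

    respondent_ratings: all individual ratings of one article (e.g., ~90 per article).
    pfc_label: the professional fact checkers' verdict, treated as ground truth.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_crowds):
        crowd = rng.sample(respondent_ratings, crowd_size)
        if modal_choice(crowd) == pfc_label:
            hits += 1
    return hits / n_crowds

# Illustrative ratings for one article; sweep crowd sizes as in the study.
ratings = ["true"] * 55 + ["false"] * 35 + ["cannot determine"] * 10
for k in (1, 5, 25):
    print(k, crowd_accuracy(ratings, "true", k))
```

A crowd of one reduces to an individual respondent's accuracy; sweeping `crowd_size` from 1 to 25 mirrors the range of crowd sizes the researchers evaluated.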

Our Findings

Using this data, our researchers evaluated how accurately ordinary users, users with high political knowledge, and machine learning models based on input from each type of user assessed the veracity of articles, relative to the PFC evaluations. Here’s what we found:

  1. No approach based on simple aggregations (e.g., the modal choice) of user evaluations — whether from ordinary users or those with high political knowledge — yielded particularly high accuracy relative to PFCs.

  2. Using machine learning algorithms typically improves performance compared to simply aggregating user evaluations — but this approach still does not perform at the level of PFCs.

  3. Both models performed best when using only the evaluations from those with high political knowledge.
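The machine-learning variant in finding 2 can be sketched as a classifier trained on per-article crowd rating shares against the PFC verdicts. The feature choice, toy data, and scikit-learn setup below are illustrative assumptions, not the paper's actual model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def crowd_features(evaluations):
    """Share of each rating category in one crowd's evaluations of an article."""
    cats = ["true", "false", "cannot determine"]
    n = len(evaluations)
    return [evaluations.count(c) / n for c in cats]

# Illustrative toy data: per-article crowd ratings plus the PFC ground-truth label.
rng = np.random.default_rng(0)
X, y = [], []
for _ in range(200):
    label = rng.integers(0, 2)        # 1 = PFCs rated the article true
    p_true = 0.7 if label else 0.35   # crowds lean toward, but don't match, the truth
    evals = rng.choice(["true", "false", "cannot determine"],
                       size=25, p=[p_true, 0.9 - p_true, 0.1]).tolist()
    X.append(crowd_features(evals))
    y.append(label)

clf = LogisticRegression().fit(X, y)
print(cross_val_score(clf, X, y, cv=5).mean())
```

Filtering the input evaluations to high-political-knowledge respondents before featurizing, as in finding 3, would simply change which `evals` feed into `crowd_features`.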

“Our research finds little evidence that a real time crowd-based fact checking approach, on its own, is sufficient to identify false news,” said Joshua A. Tucker, co-director of CSMaP and a co-author of the study. “Although crowdsourced fact checking does provide genuine information for which social media platforms or others may find potential uses, this study suggests it must be part of a far larger toolkit to combat the spread of online misinformation.”