CSMaP Responds to FTC Inquiry into Tech Censorship

March 25, 2025  ·   Policy

Our experts show what research reveals about content moderation and why some users feel unfairly targeted.

Photo: Outside the Federal Trade Commission building. Credit: ajay_suresh

The Federal Trade Commission launched an inquiry into alleged bias by social media platforms, issuing a request for public comment on how consumers may have been harmed. Tech Policy Press reached out to leading scholars from across the country to provide added context. Here's our response:

While social media platforms disproportionately moderate posts from conservative users and sources, most research suggests that this effect is likely driven by the asymmetric production of misinformation. More specifically, studies have shown that conservative users are more likely to post, share, and be exposed to misinformation, and that political elites contribute to this dynamic. The extant literature thus indicates that the ideological asymmetry in moderation (disproportionately impacting conservative content) may reflect the ideological asymmetry in misinformation dissemination (disproportionately driven by conservative users).

However, for many observers, this literature may not settle the question. A key challenge for both platforms and researchers is how to define misinformation and operationalize that definition in practice. Generally, misinformation is identified at the article level by fact-checkers, or at the source level by lists of known fake or low-quality news websites. For many conservatives, this approach may undermine the validity of the referenced research. Put another way, if there is bias in how fact-checkers, researchers, or media organizations classify misinformation, then the apparent lack of bias in moderation practices could simply reflect those initial biases in content classification.

In this context, crowd-sourced moderation systems—especially ones, such as X's Community Notes, that are designed to find agreement among diverse users—may be more convincing. A recent study of X's community moderation system finds that content from conservative users is flagged as containing misinformation more often than content from liberal users. Given the crowdsourced nature of the evaluation, we would not expect it to be subject to the classification biases that could have affected other studies. As a result, this finding suggests that perceptions of asymmetry in misinformation moderation may indeed reflect asymmetries in misinformation production.