A Conversation About Reducing Harm on Social Media

December 20, 2021  ·   News

Recap of our recent event with academic, policy, and tech experts on how to make social media a safer and more civil place.

Event flyer for Reducing Harm on Social Media, with headshots of the panelists. Credit: CSMaP

When social media platforms first launched nearly two decades ago, they were seen as a force for good — a way to connect with family and friends, learn and explore new ideas, and engage with social and political movements. Yet, as the Facebook Papers and other research have documented, these same platforms have become vectors of misinformation, hate speech, and polarization.

With attention to social media’s impact on society at an all-time high, NYU’s Center for Social Media and Politics (CSMaP) last week gathered an interdisciplinary group of experts to discuss approaches and interventions to make social media a safer and more civil place.

Moderated by Jane Lytvynenko, a Senior Research Fellow at the Technology and Social Change Project at Harvard Kennedy School’s Shorenstein Center on Media, Politics and Public Policy, the discussion featured researchers and practitioners from across the academic, policy, and tech communities: 

  • Niousha Roshani, Deputy Director of the Content Policy & Society Lab at Stanford University’s Program on Democracy and the Internet; 

  • Rebekah Tromble, Director of the Institute for Data, Democracy & Politics at George Washington University; 

  • Joshua A. Tucker, Co-Director of CSMaP and a Professor of Politics at NYU; and 

  • Sahar Massachi, Co-Founder and Executive Director of the Integrity Institute. 

The group discussed the global context of online harm, the impact of business incentives on platform behavior, and new research and tools to reduce online harassment.

A lot of the discussion about misinformation and online harm is centered on the U.S. and Europe, said Niousha Roshani. But a large majority of users — the “global majority” — live outside of western democracies, where false information on platforms can fuel violent uprisings and impact social stability in conflict areas, as we’ve seen in Ethiopia and elsewhere. Roshani’s Content Policy & Society Lab is working to mitigate the harm of disruptive technologies and platforms, whose policies are often created without considering the challenges posed by differences in culture, language, and communities. Her team is prototyping a multi-stakeholder model, which brings together civil society, private companies, and governments to identify and implement solutions to the main challenges of moderating content online while respecting fundamental human rights.

Rebekah Tromble highlighted another multi-stakeholder approach to address online harm. As trust in our information ecosystem continues to plummet, the experts who are best equipped to counter misperceptions and increase trust — such as journalists, scientists, and public health officials — have become targets of coordinated harassment campaigns. Following violent threats and demeaning attacks, these experts receive little support — from bystanders, their employers, or the platforms — and often move their voices offline, leaving the trolls to claim victory and perpetuate the culture of abuse. “Expert Voices Together,” Tromble’s new NSF-funded initiative, aims to provide real-time support to these experts. Working with specialists from academia, media, and civil society, the team is building an intake tool for those experiencing harassment; a rapid response system and monitoring mechanism to inform platform design choices; and public education tools to help others intervene. The ultimate goal is for those most impacted by these harms to have a voice in the platform design changes to address them.

Joshua A. Tucker emphasized the need to rigorously test any new measures to reduce online harms prior to implementation. He presented findings from three recent CSMaP experiments to counter misinformation and hate speech.

  1. People are often told to “do your own research” before believing anything online. Our first study found that searching online about false news stories actually made people more likely to believe them. These findings were driven by search results yielding low-quality news sources and by respondents with low digital literacy.

  2. If the quality of news sites is a problem, what happens if we tell people about their quality? In a second study, we discovered that labeling news source quality had no measurable effect on news consumption habits overall — except among those who consumed the most low-quality news.

  3. Finally, relating to Roshani and Tromble’s discussion about abuse on social media, the third study found that warning users they might be suspended for using hate speech can temporarily reduce their hateful language.

As a former Facebook employee, Sahar Massachi stressed how the organizational dynamics inside social media companies influence their products. For example, to increase profit, Facebook optimizes for metrics like growth and engagement, which often fuel harmful content. Although platforms have integrity workers to help mitigate these harms, the focus on engagement often undercuts their efforts. Only by changing the incentives, he said, can we change how social media companies approach harm on their platforms. Massachi co-founded the Integrity Institute to build a community of integrity workers who support the public, policymakers, academics, journalists, and social media companies themselves as they try to solve the problems posed by social media.

Following their introductory remarks, the panelists turned to two overarching questions: how to create better collaborative environments between academics and industry, and which government policy solutions seem most promising.

On the first question, Tromble highlighted Facebook’s 2020 U.S. Election research project, co-led by Tucker, a collaboration between the platform and independent researchers to investigate Facebook’s impact on the 2020 U.S. elections. The most important factor, she said, is ensuring that any research is made public, not just because the findings need to reach the public, but also because publication creates a mechanism for accountability. Tucker agreed, noting the risk that arises when the fruits of scientific innovation are limited to those inside the companies.

On the second question, Tromble said she is most hopeful about policies in Europe that focus on risk assessment and harm mitigation, and that do so in a broad-based way grounded in core democratic values. Massachi added that we have successfully regulated other consumer products that are harmful to society, and that we can learn from those areas, whether pollution or child safety, to deploy policies we know can work.