A look at our top articles, events, and more from the past year.
The nearly two-decade relationship between social media and politics has been defined by complexities, contradictions, and crises — dynamics brought into stark relief in 2021.
With the world locked down, social media helped people connect with family and friends; find information about Covid-19; and engage with social, political, and environmental issues. And yet, these same platforms also enabled the spread of misinformation and hate speech, and the coordination of anti-democratic and anti-vaccination movements. For many policymakers across the globe, the status quo has become untenable, setting the stage for significant legislative and regulatory actions.
It’s against this backdrop that rigorous, transparent, and independent research proves most urgent and powerful. Our mission is animated by the belief that a precondition for effective public policy and informed public discourse is a deep and nuanced understanding of the issues. At NYU’s Center for Social Media and Politics, our research aims to provide evidence as to how these new technologies impact politics, policy, and democracy.
Studying this new online media environment requires robust data infrastructure, diverse substantive expertise, and methodological innovation. Our community is composed of brilliant faculty, postdocs, students, engineers, and staff committed to producing research that can inform policy and push the boundaries of academic study. We are deeply grateful for their dedication and collaboration, and to our community of funders and internal partners whose support makes our work possible. See below for a short overview of our research and impact in 2021.
CSMaP’s primary focus is producing rigorous academic research and advancing scientific knowledge in public discourse. In 2021, we published 13 peer-reviewed journal articles and seven articles in the popular press, making important methodological and substantive contributions.
Here’s a selection of our research from the past year:
Short of Suspension: How Suspension Warnings Can Reduce Hate Speech on Twitter (Perspectives on Politics): There has been ongoing debate about the effectiveness of suspending or banning abusive social media users. But we know little about whether warning a user they might be suspended can reduce hate speech. In our experiment, the users who received warnings reduced the ratio of tweets containing hateful language by up to 10 percent a week later. Read a summary.
Moderating with the Mob: Evaluating the Efficacy of Real-Time Crowdsourced Fact-Checking (Journal of Online Trust and Safety): Misinformation spreads rapidly on social media, before professional fact checkers and media outlets have a chance to debunk false claims. Social media companies have suggested using ordinary users to assess the veracity of news articles. But ordinary users — and machine learning models based on information from those users — cannot effectively identify false and misleading news in real time, compared to professional fact checkers, according to our paper. Read a summary.
Twitter amplifies conservative politicians. Is it because users mock them? (The Washington Post): A fall 2021 Twitter report showed its algorithms were more likely to amplify tweets from right-leaning politicians. CSMaP research suggested an unlikely but plausible reason why: Because they get dunked on so much. Our data showed that conservative politicians are more likely than their peers to — in Twitter slang — be ratioed, which often indicates the tweet was unpopular, or a bad take that many Twitter users mock.
Twitter flagged Donald Trump’s tweets with election misinformation: They continued to spread both on and off the platform (Harvard Kennedy School Misinformation Review): Twitter flagged hundreds of Donald Trump’s tweets as election misinformation, either attaching a warning label or blocking engagement entirely. Our research team found those tweets continued to spread widely. In fact, messages blocked on Twitter spread further and longer on Facebook, Reddit, and Instagram, emphasizing the importance of studying content moderation at the ecosystem level. Read a summary.
Testing the effects of Facebook usage in an ethnically polarized setting (Proceedings of the National Academy of Sciences [PNAS]): Many believe social media increases societal polarization. But in our Facebook deactivation experiment in Bosnia and Herzegovina, we showed that going off Facebook makes people in ethnically homogeneous offline environments more ethnically polarized. Read a summary.
Trumping Hate on Twitter? Online Hate Speech in the 2016 U.S. Election Campaign and its Aftermath (Quarterly Journal of Political Science): By developing novel methods for classifying hate speech, we found that — counter to a popular narrative advanced at the time — there was no systematic increase in hate speech on Twitter over the course of the 2016 campaign and its aftermath. Read a summary.
Beyond academia, our team also focused on increasing the public impact of the Center’s work, through policy engagement, events, and more.
The Facebook Papers
Attention around social media’s impact on society was at an all-time high in 2021, thanks in large part to disclosures from the Facebook Papers, which offered a chilling account of Facebook’s role in the diffusion of misinformation, the prevalence of hate speech, and the coordination of the Capitol insurrection. They also highlighted the tremendous social cost of siloed information: Facebook had a monopoly on relevant data to study the effects of its platforms and kept the internal research hidden.
CSMaP experts contributed to this debate throughout the fall, calling on the documents to be shared with academic researchers and for policymakers to require more transparency from social media platforms.
“With access to these documents, scholars could support the media, public, and policymakers in identifying where Facebook’s internal research is conclusive, what inferences can be drawn, which topics require more evidence and future research, and what that research should be,” wrote Zeve Sanderson, Jonathan Nagler, and Joshua A. Tucker at Slate’s Future Tense.
“It’s critical for the government, through legislation or regulation, to require social media platforms to be more transparent and open up more data to outside researchers,” urged Nagler and Tucker in the New York Daily News. “In May, Reps. Lori Trahan and Kathy Castor introduced the Social Media DATA Act, which would deliver some of the information we need to study the impact of digital ads. The bill is an important first step to help ensure scholars can produce research that benefits the public, enables evidence-based policymaking, and supports our democratic institutions. With President Biden’s support, regulatory agencies, like the FCC or FTC, could go further by requiring even more platform transparency.”
Writing for Brookings, Tucker and Stanford’s Nathaniel Persily explained the three essential characteristics robust data access legislation should have: First, a federal agency must be empowered to compel large internet platforms to share data. Second, that agency should vet researchers and projects. Third, data should reside in the firm, and regulations should specify how to access data and publish results in a way that would not endanger user privacy. The proposal was based on Persily’s Platform Accountability and Transparency Act, which a bipartisan group of senators introduced in December.
In 2021, CSMaP directors presented at more than 25 external events, ranging from academic workshops to public-facing lectures. Internally, we also ran a number of events for both public and academic audiences.
Reducing Harm on Social Media: In December, we convened an interdisciplinary group of academic, policy, and tech experts to discuss various approaches and interventions to make social media a safer and more civil place. The group discussed the global context of online harm, the impact of business incentives on platform behavior, new research and tools to reduce online harassment, and more.
Second CSMaP Annual Conference: In April, we convened 50 leading scholars for a two-day virtual conference to present research on a range of topics at the intersection of social media and politics, with a particular focus on the relationship between social media and polarization. Experts presented several interesting papers, including “Breaking the Social Media Prism: How to Make Our Platforms Less Polarizing” by Duke’s Christopher Bail and “Legislative Communications and Power: Measuring Leadership from Social Media Data” from Caltech’s Michael Alvarez.
Lessons Learned from 2020: The 2020 election posed widespread challenges for voters, election officials, and cybersecurity experts who worked to administer and protect the electoral process. In April, CSMaP hosted “Lessons Learned: Navigating a Presidential Election Through a Pandemic” — a workshop to explore how our country can continually improve our electoral process to be resilient in the most trying of times, while simultaneously advancing basic scientific research on all aspects of the voting process. Sponsored by the National Science Foundation, the event convened experts from a range of disciplines, including law, political science, and computer science.
The scale and scope of our data collections are key to our innovative research agenda. These collections are overseen by our talented team of research engineers, and are enabled by NYU’s state-of-the-art high-performance computing cluster.
Over the past year, we have both welcomed new researchers and supported others as they transitioned into the next stages of their careers.
In September, we welcomed Patrick Wu as a postdoctoral fellow, and Hannah Waight will join us in February 2022. They join Kevin Aslett and Zhanna Terechshenko, who are finishing the third and final year of their fellowship this academic year, and James Bisbee and Maggie Macdonald. Together, our postdocs serve as the Center’s core research engine, representing methodological, substantive, and disciplinary diversity.
This past academic year, we also fully supported three Center for Data Science PhD students, and offered research fellowships to four politics PhD students. Collectively they examined some of the biggest questions at the intersection of social media and politics.
In what was a challenging academic job market, three of our postdoctoral fellows received and accepted tenure-track roles: In the fall, Haohan Chen joined the University of Hong Kong as an Assistant Professor of Politics and Public Administration, and Tom Paskhalis joined Trinity College Dublin as an Assistant Professor in Political Science and Data Science. James Bisbee will be an Assistant Professor in Political Science and Data Science at Vanderbilt University starting in fall 2022.
In the News
Here’s a selection of some of the best stories citing our work and researchers in 2021:
The New York Times: YouTube’s stronger election misinformation policies had a spillover effect on Twitter and Facebook, researchers say.
USA Today: Twitter blocked and labeled Donald Trump’s tweets on election fraud. They spread anyway.
The Washington Post: Which Republicans are most likely to think the election was stolen? Those who dislike Democrats and don’t mind white nationalists
Protocol: Can Twitter warnings actually curb hate speech? A new study says yes.
Tech Policy Press: A Modest Ox: Examining Two Approaches to Testing Crowdsourced Fact Checking
Popular Science: A look inside TikTok’s seemingly all-knowing algorithm
Gifts and grants fund everything we do, from undertaking ambitious research to building out a talented team of researchers. Please consider supporting our work.