Tech, Law & Security Program

Addressing Harmful Content Online

Significant attention has been paid to the role of a handful of large content curators (Facebook, Twitter, YouTube, TikTok) in addressing hateful and other harmful speech online. Meanwhile, much less attention has been paid to the range of other actors in the Internet stack, all of which have varying degrees of power to cut off or deny key services to undesirable actors in the system. This was laid bare when, in the wake of the El Paso shooting, Cloudflare stopped providing services to 8chan, the online message board used to disseminate the shooter's manifesto, essentially forcing the site temporarily offline. That decision followed Cloudflare's earlier move, a few years prior, to cut off services to The Daily Stormer, a self-proclaimed neo-Nazi, white supremacist site. And in 2010, web hosts and payment processors withdrew their services from WikiLeaks, temporarily forcing it offline in the wake of U.S. government pressure after the publication of a stash of stolen U.S. Department of Defense documents.

These are hard choices to make: At what point, and according to what standards, should companies deny services to individuals, entities, or sites? What are the follow-on consequences of doing so? What kind of process should be put in place both for making these decisions and for educating and informing users? And what is the full range of tools and levers that can be used to respond to harmful content online? These are the key questions that this project will analyze and answer.

Conversations with a range of companies indicate that players across the Internet ecosystem are actively struggling with these questions. This project will bring together disparate actors in this space, help develop common guiding principles, and educate the general public and policymakers alike about the roles and responsibilities of players up and down the Internet stack in identifying and responding to harmful content online, including a robust assessment of the speech, privacy, and security considerations at each layer of the stack.

Sponsors and Partners:

ADL
 
Craig Newmark Philanthropies
 
Knight Foundation
 
Content Governance in the Shadows: How Telcos & Other Infrastructure Companies "Moderate" Online Content

April 2023

This paper addresses significant policy challenges raised by the online content governance activities of non-application-layer internet infrastructure companies. In addition to exploring the nuances of those challenges, the paper makes recommendations for how telcos can improve transparency about their practices and for how all non-application-layer companies can weigh substantive content governance principles.

Toward Greater Content Moderation Transparency Reporting

October 2022

As freedom of expression increasingly takes place online, the question of how to balance it against the need to moderate harmful content has become more and more pressing. Society faces vexing questions about which part of the online information ecosystem, if any, is best positioned to manage and moderate malicious content. This article serves as part of TLS's answer to the question of whether content moderation should be conducted across the multilayered system that makes up the internet rather than only on the large social media platforms.

The Lack of Content Moderation Transparency: The Cloudflare and Kiwi Farms Example

September 2022

Cloudflare, a large provider of internet security services, has withdrawn services from Kiwi Farms, an online forum. This article explores that decision and its relationship to broader content moderation trends among internet infrastructure companies.

Widening the Lens on Content Moderation - Mapping the Ecosystem of Online Content Dissemination

July 2021

As high-profile social media platforms face increasing scrutiny over their content moderation activities, the majority of the internet is often left out of the conversation about how to effectively combat harmful content online while also protecting fundamental rights and civil liberties. This report engages in that larger conversation by discussing the full internet ecosystem’s role in generating, curating, and disseminating online content.
