Shortly after 10 a.m. on Saturday, an unsigned document was posted to 8chan, a site that calls itself “the darkest reaches of the Internet.” Its author appeared to be a twenty-one-year-old white man from near Dallas, Texas, who had just driven about nine hours to the border city of El Paso. According to the document, the young man was inspired by the mass shooting in Christchurch, New Zealand, in March. In Texas, his goal was to murder as many Hispanic civilians as possible. Carrying an AK-47-style assault rifle, he killed twenty people and wounded twenty-six others. Apparently, he intended to spark a race war—or, rather, to accelerate a race war that he already believed to be in progress. “Do your part and spread this brothers!” he wrote on 8chan. “Keep up the good fight.”
White-supremacist terrorism is nothing new, but this sickeningly specific instantiation of it—lone shooter, assault rifle, online manifesto, a link to a live stream—appears to be contagious. Thirteen hours after the massacre in El Paso, a shooter opened fire outside a bar in Dayton, Ohio. It’s not yet clear what his motive was, but eyewitnesses described him as a white man in his twenties. On Saturday, 8chan users passed around a Google spreadsheet listing some of these shootings: date, location, number of kills. Anders Breivik and Dylann Roof, who are commonly referred to on 8chan as “martyrs,” were at the top of the list. The El Paso shooter was at the bottom. It’s hard to imagine that this list won’t keep growing.
The national conversation will now turn, as it should, to gun control, to mental illness, and to the President’s practice of exacerbating racial tensions, which has been one of his avocations for decades and now appears to be his central reëlection strategy. But there’s also a more specific question: what can be done about the fact that so many of these terrorists—in Pittsburgh, in Poway, in Christchurch, in El Paso—seem to find inspiration in the same online spaces? Each killer, in the moment, may have acted alone, but they all appear to have been zealous converts to the same ideology—a paranoid snarl of raw anger, radical nationalism, unhinged nihilism, and fears of “white genocide” that is still referred to as “fringe,” although it’s creeping precariously close to the mainstream. On many social networks that bill themselves as bulwarks of “free speech,” including Gab, 4chan, and 8chan, this way of thinking is so dominant that it is often taken for granted. In April, the Anti-Defamation League wrote that such platforms “serve as round-the-clock white supremacist rallies.”
Can these platforms simply be shut down? The answer is complicated, but basically binary: there is a lot that private companies can do to censor speech, but much less that the government can do. The United States has some of the most expansive free-speech protections in the world. There are exceptions, such as incitement to violence, but the bar to prove incitement is quite high. In the nineteen-sixties, a Ku Klux Klan leader was arrested under an Ohio law that prohibited advocating “crime, sabotage, violence, or unlawful methods of terrorism.” But the Supreme Court—at the time, arguably the most liberal Court in American history—ruled unanimously that the Klan leader’s First Amendment rights had been violated. This case, Brandenburg v. Ohio, laid out the standard that still applies: the government can only censor speech if it is “directed to inciting or producing imminent lawless action” and “likely to incite or produce such action.” Some of the recent white-supremacist manifestos might meet this standard, and yet, even if one or several of them could be banned, this wouldn’t necessarily apply to the platforms on which the manifestos are posted. According to Section 230 of the Communications Decency Act, “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” In other words, a site’s owner is not liable for the content on that site. In March, the Washington Post reported that the owner of 8chan, Jim Watkins, an American living in the Philippines, had “built a technical fortress to guard 8chan from potential takedowns.” The site’s founder, Fredrick Brennan, who no longer works at the company, told the Post that Watkins was “content to lose money” on the site: “8chan is like a boat to Jim.” A boat may be full of snakes, or explosives, but this doesn’t guarantee that the government will be able to seize it.
Still, even in the absence of government intervention, there are other pressures that can be brought to bear. In August of 2017, a few days after the white-supremacist demonstrations in Charlottesville, Virginia, a Web-security company called Cloudflare severed its relationship with a neo-Nazi propaganda site called the Daily Stormer. Without Cloudflare’s protection, the Daily Stormer would be vulnerable to denial-of-service attacks, allowing online vigilantes to crash the site more or less at will. “Our terms of service reserve the right for us to terminate users of our network at our sole discretion,” Matthew Prince, the C.E.O. of Cloudflare, wrote on the company’s blog. “Now, having made that decision, let me explain why it’s so dangerous. . . . Without a clear framework as a guide for content regulation, a small number of companies will largely determine what can and cannot be online.”
Cloudflare still offers its protective services to 8chan. “We’re the Fedex of the internet, passing messages on, not looking inside the boxes,” Cloudflare’s head of policy told Forbes in March. But, after Cloudflare had made an exception to this rule by banning the Daily Stormer, this was no longer entirely true. “It’s powerful to be able to say you’ve never done something,” Prince wrote in his blog post. “And, after today, make no mistake, it will be a little bit harder for us to argue against a government somewhere pressuring us into taking down a site they don’t like.”