Cyber Mobs, Disinformation, and Death Videos: The Internet as It Is (and as It Should Be)

The following is a preview of an article forthcoming in the Michigan Law Review, 118 Mich. L. Rev. (forthcoming 2019).

When done well, fiction and visual representations alter our understanding of the human experience in a visceral way. They enable us to bear witness to suffering. Nick Drnaso’s graphic novel Sabrina does that in spades. It provides a powerful snapshot of online norms. The picture is not pretty.

In Sabrina, a young woman goes missing. We learn that her neighbor, a misogynist loner, killed her and recorded the murder. Online, people clamor for the video, and the execution footage soon leaks and goes viral. A conspiracy theorist with a popular radio show claims the murder is a hoax and gins up his listeners to "investigate" what is really going on. A cyber mob descends, smearing the woman's loved ones as crisis actors, sending death threats, and posting their personal information. The attacks continue until a shooting massacre redirects the mob's wrath. The mob does not miss a beat. It continues to exact its pound of flesh, but this time from other mourners.

The novel's contrast between quiet introspection offline and loud negativity online lets readers feel how jarring and destabilizing a cyber-mob attack can be. One minute, people are safely and anonymously going about the minutiae of daily life. The next, they are caught in the blinding glare of a cyber mob's attention. They are exposed, maligned, and scared. Sabrina helps us appreciate what it is like to be in the vortex of a cyber-mob attack.

Sabrina captures the breathtaking velocity of disinformation and conspiracy theories online and their rapid escalation into threats and violence. Every day, people are radicalized online to wreak havoc. On August 3, 2019, in El Paso, Texas, a twenty-one-year-old man posted a racist manifesto on 8chan and then walked into a Walmart with a powerful rifle, killing twenty-two people and injuring many others. The killer trafficked in hateful conspiracy theories and engaged with others who shared them.

Drnaso’s novel invites a conversation about human behavior, culture, and law in the digital age. Right now, it is cheap and easy to wreak havoc online and for that havoc to go viral. We like, click, and share grotesque execution videos, conspiracy theories, and destructive falsehoods without thinking. We have always been drawn to information that resonates with us, especially the provocative and negative, but the online environment seems to supercharge human biases.

Cyber-mob attacks inflict profound harm. Targeted individuals fundamentally change their lives. They move. They change their names because it is impossible to obtain employment, find love, and meet clients when one's Google search results are filled with threats, falsehoods, and privacy invasions. They lose their jobs and have difficulty finding new ones. They experience profound emotional distress, anxiety, and depression. They shut down their social media profiles, blogs, and websites, because keeping them invites more abuse.

Viral conspiracy theories and falsehoods also undermine our sense of a shared reality. This is a perilous time for the pursuit of truth. Even the President of the United States cries "fake news" and propagates fringe theories on his official Presidential Twitter account. Things are poised to take a turn for the worse with the emergence of deep-fake technology.

Platforms structure and shape online activity, so what are they doing about online abuse? Tech companies act rationally—some might say responsibly to their shareholders—when they tolerate abuse because it generates advertising revenue and costs them nothing in legal liability. As Mary Anne Franks explains, platforms have “little incentive to stop [online abuse], and in some cases are incentivized to ignore or aggravate [it].”

Platforms are best situated to minimize the damage of online abuse. Through their design choices and their speech policies and procedures, platforms control what content appears on their services. And yet, thanks to broad judicial interpretation of Section 230 of the Communications Decency Act, a federal law passed in 1996, tech companies are largely immune from liability for their users' illegality.

Combatting cyber-mob attacks must be a priority. Law should raise the cost of cyber-mob attacks. It is time for tech companies to redress some of the negative externalities of their business model. As Benjamin Wittes and I have argued, platforms should not enjoy immunity from liability for user-generated content unless they have earned that immunity with reasonable content moderation practices.

A few years ago, the notion of doing anything about Section 230 was viewed as madness; today, fixing it is a real possibility. Mary Anne Franks and I are currently working with federal lawmakers, both Democrats and Republicans, on potential legislative changes to Section 230. Federal lawmakers have expressed interest in the statutory fix that Benjamin Wittes and I proposed: conditioning immunity on reasonable content moderation practices. If adopted, the question before a court on a motion to dismiss on Section 230 grounds would not be whether a platform acted reasonably with regard to a specific instance of speech. Instead, the court would ask whether the platform engaged in reasonable content moderation practices writ large and thus earned the immunity.

Education must play a role as well. Each and every one of us is ultimately responsible for liking, clicking, and sharing the destruction. We have to acknowledge and discuss the human frailties that lead to our unthinking clicking, liking, downloading, posting, and sharing. We have to consider strategies that can help us stop and think before posting, sharing, and liking content that is salacious or provocative or that simply aligns with our viewpoints. As digital citizens, we need to do better.
