Analysis and Theory

Section 230: The right incentives?

February 2, 2024

This post was originally published on November 17, 2021 to RSM’s former Medium account. Some formatting and content may have been edited for clarity.

The Incentive Question

Section 230: in a nutshell, it’s a piece of legislation that 1) frees platforms from most liability for what their users say and do on those platforms and 2) offers platforms protection for “good faith” moderation of digital speech. (You can get a more in-depth view on Section 230 here.) Maybe you’ve heard of Section 230, or maybe you haven’t. Even just a year or two ago, most Americans had no idea Section 230 existed. It has since received immense attention, amidst calls for its repeal and concerns that it enables some of the Internet’s biggest problems, including misinformation and harassment.

Though some perspectives are changing, many digital experts agree that Section 230 should, with few exceptions, remain untouched. Their reasoning harkens back to the case that ultimately led to Section 230’s creation: Stratton Oakmont, Inc. (fun fact: this was the firm depicted in The Wolf of Wall Street) v. Prodigy Services Co. (an early online service that also provided subscribers a range of networked features, including bulletin boards). The short version of the story is as follows: an anonymous user posted not-so-nice remarks about Stratton Oakmont and the legality of its operations on a Prodigy bulletin board. Angry, Stratton Oakmont then sued Prodigy for defamation. The New York court that heard the case ultimately decided that Prodigy could be held responsible for the message. Why? Here comes the important part: because Prodigy exercised “editorial control”; in particular, Prodigy moderated content on its platform. And by moderating content, the court reasoned, Prodigy wasn’t just a “digital library,” it was a “publisher.” And publishing…well, as any newspaper will tell you, that means legal liability for what’s been written.

In Washington D.C., U.S. Representatives Chris Cox and Ron Wyden were alarmed. The Stratton Oakmont case very much disincentivized online moderation: “only if a platform made no effort to enforce rules of online behavior would it be excused from liability for its users’ illegal content.” Cox and Wyden got to work on a fix — Section 230 — which was ultimately included in the larger 1996 Communications Decency Act. (Most of the Act was later struck down in court, but Section 230 survived.) Today, because platforms aren’t liable for what their users say and do, and because they’re protected when they moderate in good faith, they should, in theory, be incentivized to discourage and moderate harmful (e.g. illegal) content.

But that raises the all-important “incentive question”: does Section 230 actually encourage responsible moderation practices, as Cox and Wyden intended? When, if ever, are those incentives tested — and when do they fail? Is it possible Section 230 is actually allowing other — potentially bad — behavior? And what can all this tell us about Section 230 and its future?

Section 230 Gone Wrong

As part of research with the Berkman Klein Center’s “pop-up institute” to reboot social media, I’ve learned that the answer to the “incentive question” is “yes, but not perfectly.” As a few cases illustrate, when Section 230 “goes wrong,” it can go really wrong, and in ways that challenge the purported power of the “right incentives” argument.

Take the story of Matthew Herrick. He was sitting outside of his apartment in New York City when a man approached him, claiming Matthew had invited him over for sex. Matthew had not…and was stunned when the man showed him what looked like a Grindr account in Matthew’s name, a fake profile. Matthew would soon learn that his ex, the person behind the fake account, had not only messaged that man, but many others. Over the next ten months, 1,400 people showed up everywhere: at Matthew’s home, at his workplace…and many were told that if Matthew resisted, “they should just play along.”

Matthew immediately contacted Grindr, begging the company to take down the profile. And Grindr had every reason to: Section 230’s “good faith” moderation protections exist precisely so platforms can remove content like this without fear of liability. But repeated complaints over ten months did nothing. Matthew sought help elsewhere, contacting the local police department, which brushed him off; one officer even “rolled his eyes” at him. After the account was finally taken down, Matthew filed a negligence lawsuit against Grindr…which was unsuccessful, thanks to (guess what?) the company’s Section 230 defense.

You might argue, though, that Grindr probably didn’t mean to cause the harm — its inaction was an accident! Unfortunately, platforms can be bad actors, too. Take The Dirty, a gossip platform founded by Hooman Karamian, better known as “Nik Richie.” The Dirty’s self-named “Dirty Army” scrounges up “dirt” — however true or false — on anyone and everyone and anonymously shares it with Richie, who posts his “favorite” dirt on the platform. Unsurprisingly, it’s a breeding ground for defamation: victims include Sarah Jones, a high school teacher who was falsely accused on The Dirty of sleeping with Cincinnati Bengals players and infecting others with STDs, and others accused of everything from psychiatric disorders to financial trouble. It’s another example of Section 230 gone wrong: The Dirty is undeniably a “bad samaritan,” yet it receives Section 230 protections. Nik Richie has never once been held legally responsible for the lives he has ruined.

The cases go on and on — a list far too long for this one blog post. There’s Backpage, a platform that routinely facilitated sex trafficking, often of minors; before Congress carved out an exception in 2018, Section 230 consistently shielded Backpage from being held liable as a “service provider.” (Backpage was later seized by the U.S. Department of Justice.) There’s also Omegle, a platform designed to connect strangers one-on-one. Users are reminded that (thanks to Section 230) the strangers they meet “are solely responsible for their behavior.” In other words, if you meet a sexual predator — an issue the platform has faced and, many would say, addressed poorly, despite robust Section 230 protections — don’t blame Omegle.

The future of the Internet

So…the bad news is that Section 230 clearly has challenges. The good news is that these cases also articulate both 1) what those challenges are, and 2) how, given those challenges, Section 230 and Internet moderation might need to evolve. There are 3 key takeaways, and 1 actionable step for platforms:

  1. Conflicting incentives challenge Section 230’s basic premise that platforms are always best positioned to govern speech: The Dirty and Backpage aren’t just stories about the Internet’s power to harm: they show platforms have conflicting interests when it comes to moderation. The Dirty was a hit because of its objectionable content, and indeed, even on more “mainstream” platforms, like Facebook or Twitter, it’s often gruesome, sexualized, or, most prominently, false content that gets clicks, eyeballs, and attention. (In 2019, for example, a massacre in New Zealand was livestreamed on Facebook, where it was viewed several thousand times before being taken down.) The Dirty, in particular, shows how Section 230 can be used as a “shield” by “bad samaritans” — far from what Cox and Wyden had envisioned. Given that, it’s worth encouraging scholarly debate on whether and when these bad samaritans merit Section 230 protections — an area of discussion that has seen plenty of attention lately.
  2. Platforms need to rethink Section 230 and moderation: At a high level, platforms need to change the way they think about both. Until recently, many believed that Section 230’s main benefit to the public was a thriving Internet economy; in the age of Big Tech, however, its biggest benefit has become the moderation it (in some cases) promotes…and without effective moderation, Section 230 looks far less appealing. Given that, there needs to be a fundamental shift — for many social media platforms, it’s already under way — in how platforms think about and approach moderation. In some cases (such as Point #3, below), that means identifying and targeting specific issues.
  3. When moderation goes wrong, the people who lose out are often members of marginalized groups: Finally, as Matthew Herrick’s story makes clear, the burden of “imperfect” moderation — the “cracks” in Section 230, so to speak — tends to fall disproportionately on marginalized groups, including BIPOC, the LGBTQ+ community, and women, among others. By one estimate, a woman is harassed on Twitter every 30 seconds, a troubling example of how certain speech can be suppressed in the name of “more speech”…for others. The issue is particularly concerning given the parallel to real-life inequities, in policy and in society at large.

Looking ahead, then, platforms should have 1 definite “action item” on their plates: targeting and tackling these “interaction effects” between moderation gaps and the communities they harm most. That means researching moderation gaps, to understand where they come from and how they impact social media ecosystems; it also means taking meaningful action, whether reallocating moderation resources or even revisiting and rethinking how online communities have been designed. After all, we’re all behind the Internet — and by looking inwards, we might stand a chance at realizing the digital world Section 230 was intended to create.

Trisha Prabhu is a senior at Harvard College in Cambridge, MA. Originally from Naperville, Illinois, Trisha is concentrating in Government on the Tech Science pathway and pursuing a secondary in Economics. Her research interests include Section 230, the regulation of online speech, and cyberbullying and digital harassment.

This is an independent project developed while the author was working as a research assistant for the Institute for Rebooting Social Media at the Berkman Klein Center for Internet & Society at Harvard University. The Institute for Rebooting Social Media is a three-year, “pop-up” research initiative to accelerate progress towards addressing social media’s most urgent problems. Research assistants conducted their work independently with light advisory guidance from Berkman Klein Center staff.