Analysis and Theory

Gonzalez and Taamneh: What You Need to Know

February 20, 2023

RSM Research Assistant and Harvard Law School 2L Dylan Moses argues the importance of Section 230 and explains the legal nuts and bolts of Gonzalez and Taamneh.

Every time you access the internet, algorithms are curating what content you see: the email promotions you receive through Google, Apple’s suggestions for an article you might like to read, Amazon’s recommendations for which shirts go best with the pants you just bought. Algorithmic curation is a hallmark of the modern internet, but that might soon change. The Supreme Court is likely to upend this status quo by removing the legal protections these companies have enjoyed. And that would be a mistake.

Section 230 of the Communications Decency Act (CDA 230) has fundamentally shaped those algorithmic experiences. Passed twenty-seven years ago, the law shields internet companies from liability when they publish third-party content. And that makes sense. Many of the services we use today would not survive if they were liable for everything that a bad actor uploaded to their site.

These systems, which personalize user experiences, are the present-day equivalent of publishing. Just as newspapers and magazines prioritize the order in which stories appear in their pages, internet publishers use algorithmic curation to surface the content that is most relevant to you.

The issue is that these personalized experiences can lead to deadly outcomes. Algorithmic curation is at the heart of the two cases currently before the Supreme Court. The cases hinge on whether Google and Twitter knowingly directed terrorist propaganda at users when they curated what those users saw, curation that the plaintiffs say led to the deaths of their loved ones.

Together, the cases symbolize what some on the Court feel is the problem with CDA 230: it’s an undefeatable liability shield that protects already too-powerful internet giants. And it’s a protection that Congress could not have possibly intended.

While Congress intended CDA 230 to protect websites from liability for publishing someone else’s speech, courts have interpreted the law far more broadly. And if the Court’s last term is any indication of its commitment to history and tradition, we can expect it to zero in on that fact.

Lower courts have held that algorithmic recommendations are a tool for directing content, but that they aren’t content in and of themselves. These sweeping rulings have essentially immunized the platforms from liability for curation that foments genocide, incites mass shootings, and propagates election misinformation. This odd imbalance in the law has led Justice Thomas in particular to believe that lower courts have strayed too far from the “natural reading” of the statute. It’s not a stretch to think that many of his compatriots on the Court think the same.

In reality, these cases are a scapegoat for our frustration with the tech giants. We like the promotions, news articles, and new outfit suggestions; but we dislike all the hate, the vitriol and the fact that they seem to make billions of dollars on top of it all. But the Court is not the right vehicle to vent our frustrations.

A ruling against the platforms here will have ripple effects beyond these cases. The algorithms at issue are the same ones that help startups get their products in front of new audiences and help students learn about the world around them. Job seekers use them to find new employment opportunities, and newspapers rely on them to amplify breaking stories. The Court needs to understand that even a narrowly tailored opinion would have major implications not just for the platforms but for anyone who relies on these services to deliver timely, relevant information.

And that’s not all that’s at stake. Consider the anti-speech legislation we’ve seen from states like Florida and Texas that seeks to hold platforms liable for their content moderation. Or the backlash against LGBTQIA+ communities in our schools. Or the crackdown on the recommendation of abortion services. A paring back of the law will force internet companies to deal with every new culture war boogeyman of the day. And many, even those not directly in the business of social media, will chill speech to avoid liability. That would be a disaster.

We don’t need the Court divining Congress’ intent behind CDA 230, especially when Congress agrees the law needs to be updated. Congress is just across the street; if it wants to, it has the tools to fix it.

One area could be determining standards for liability. For example, it’s unlikely that Google or Twitter knowingly directed terrorist content at users on their platforms. Although the algorithms are under a company’s control, the processes are automated. There are ways to optimize for finding and removing bad content, but it’s not an exact science, and errors occur often.

However, that shouldn’t let reckless or negligent behavior off the hook. We want to promote innovation both in the way that internet companies deliver products and services and in the way that they ensure the trust, safety and integrity of those products and services.

Congress could create a cause of action under CDA 230 that allows the Attorney General to file a lawsuit against companies when their algorithms deviate from commonly accepted standards and lead to real-world harm. It could provide federal agencies with funding to promulgate and enforce regulations when internet companies deviate from those standards. This is especially warranted when we know a company actively profited from egregious acts.

There are many other open questions. Should Google be regulated the same as an upstart website? How large should fines be for violations? How close should the connection between a platform’s recommendation and the resulting act be for liability to attach? A body that can call investigative hearings, deliberate with technical and civil society experts, establish flexible frameworks for when and how liability should be imposed, and delegate power to regulatory agencies to ensure enforcement is far better suited to answer them than nine lawyers determining the rules for online governance. And it allows for the chiseling that Congress is uniquely positioned to do.

CDA 230 was built for the nascent internet. Over the last 27 years, the internet has wildly changed, for better or for worse, because of the protections the law provides. It would be disingenuous to think that everything shielded by CDA 230 today is exactly what Congress intended a quarter of a century ago. But that’s exactly why Congress – and not the courts – should remedy this.

Dylan is a law student at Harvard University, where he is a student member of the Federal Communications Bar Association and focuses on the intersection of law, public policy, and internet-powered technologies.

Prior to his graduate studies, Dylan spent time working on the social and technical sides of the Internet. He was a fellow at the Berkman Klein Center for Internet & Society and held several roles in content policy and operations at Facebook and YouTube focused on mitigating the risks of online hate speech, terrorism, and misinformation.