
Friends of the Court:
Gonzalez Amici Offer Their Perspectives

February 20, 2023

As part of our RE:COMMITTED newsletter, we checked in with individuals and organizations who filed amicus briefs in Gonzalez v. Google. Legal experts from the Knight First Amendment Institute, Public Knowledge, EPIC, the Cato Institute, and elsewhere weighed in.

For over twenty-seven years, Section 230 has shaped expression on the internet by shielding online service providers from liability stemming from their users’ speech. Ahead of oral arguments in Gonzalez v. Google, one of two cases this term where the Supreme Court has the chance to weigh in, we reached out to ten organizations and individuals who filed amicus briefs, inviting them to elaborate on their perspectives, share their ideal outcomes, and paint a picture of what the future of the internet may look like depending on the Court’s decision. Their responses are below.


Public Knowledge

Answers provided by John Bergmayer, Legal Director. You can read Public Knowledge’s full amicus brief here.

What is the ideal outcome from Gonzalez?

Google should prevail on the law. At the same time, the Court should avoid an overly broad reading of 230 that protects non-publishing activities, such as the collection or sale of personal information.

The outcome also should be clear. The worst outcome might be a loss for Google, but without 5 votes for any given legal rationale. I think such an opinion would throw both platforms and the lower courts into chaos. (A similar result in the other direction would just keep the status quo.)

What do you think the Petitioners/Respondent get wrong, and what intervention does your brief make on this issue?

Contrary to Petitioners, algorithmic recommendations fit the common law understanding of “publication.” There is no principled way to distinguish them from other platform activities that most people who support the Petitioners’ side agree should be covered by 230. For example, the attempt by both Petitioners and the DOJ to distinguish search results from recommendations is legally and factually wrong.

What might the court get wrong in this case? How would the web and social media change?

If the Court gets this wrong, it could limit the usefulness or even viability of many services, including nonprofit, decentralized services such as Mastodon, that carry user-generated content and speech. The internet might become more of a broadcast medium, rather than a venue where people can make their views known and communicate with each other freely. And useful features of platforms may be shut down in an abundance of caution.

What other fields and domains might the holdings in this case unexpectedly shape?

Online marketplaces, data brokers, even AI—there is already discussion about how AI models that draw information from the internet are or are not able to benefit from Section 230.

Is judicial interpretation the best way to go about resolving these questions, or would you prefer congressional intervention, however unlikely? What would that congressional intervention look like, and how might it differently tailor the immunities currently provided by 230?

A major theme of our brief is that the Court should avoid legislating from the bench. These issues are too nuanced, and only precise statutory (or regulatory, if we had a digital regulator) language can address specific harms while minimizing unintended consequences.

Where do you think we will be on these issues ten years from now?

I hope that we are not only not having the same debate, but that we have established a better way of addressing these questions in the first place. A digital regulator with oversight over platforms, for instance, can address harms more effectively than the occasional high-profile Supreme Court case.



Wikimedia Foundation

Answers provided by Jacob Rogers, Associate General Counsel, and Leighanna Mixter, Senior Legal Manager. You can read the Wikimedia Foundation’s full amicus brief here.

What is the ideal outcome from Gonzalez?

The Wikimedia Foundation’s ideal outcome is that Section 230 is not changed by the Court’s ruling, likely coming from a ruling in favor of Google. As we underlined in our amicus brief and blog post, Section 230 protects community-governed websites like Wikipedia and other Wikimedia projects, and it is a critical protection for that model of website creation as well as content moderation.

What do you think the Petitioners/Respondent get wrong, and what intervention does your brief make on this issue?

We share the Petitioners’ desire for an internet that avoids harm to people, including reducing the prevalence of terrorist content. We support efforts to better address these issues through the further development of industry best practices and appropriately tailored lawmaking. However, the Petitioners’ arguments, including their position on Section 230, fail to fully capture crucial elements of how the internet operates. This has the potential to backfire and create a less safe internet. The new legal risks that websites may face if Section 230 is amended or repealed would likely make addressing harmful content harder, not easier.

Furthermore, as noted in our amicus, we agree with Google that trying to somehow split hairs as to what “publisher” means would only create confusion and uncertainty without improving the ability of websites to address harmful content. Similarly, the attempt to separate “recommendations” from publishing would offer opportunities for inappropriate lawsuits outside of Section 230 protections. This would put significant strain on hosting projects with user-generated content, especially for nonprofits or small and medium-sized online platforms that do not have large litigation budgets.

There is a very real risk that a broad interpretation of the term “recommendations” would result in censorship of accurate, legitimate content through lawsuits used to chill speech. These arguments around content recommendations create potential risk to individual Wikipedia volunteers who edit and review content submitted by others.

What might the court get wrong in this case? How would the web and social media change?

The worst thing the Court could do in this case would be to deliver an expansive ruling that all “recommendations” are no longer protected by Section 230. Because of how vague the terms used in the Petitioners’ argument are in practice, a broad ruling of this nature poses the risk of creating liability for a vast array of activities by both website hosts and users. We gave several examples of this in our amicus, which covered things as basic as website design and linking between articles.

Even if the Court does not rule so broadly, a decision that creates a very fact-specific requirement for Section 230 protection could be problematic, opening the door to more litigation and back and forth in the courts to determine whether websites meet very specific criteria for Section 230 protections.

If future courts cannot quickly dismiss cases because each case requires looking into the specific details of how each website is designed and what words they use to describe how they arrange content, it will make litigation much more expensive. These costs can be exploited by bad actors who want to use litigation to censor lawful speech.

What other fields and domains might the holdings in this case unexpectedly shape?

The breadth of the arguments being considered by the Court means that a ruling in this case could go well beyond algorithms and social media platforms. It will cover nearly all of the internet, including mission-driven, community-governed websites like Wikipedia.

Wikipedia and the other Wikimedia projects are not structured like social media websites. Wikimedia projects accept user-generated content, but under community-created rules for what content is appropriate, such as accepting only notable, encyclopedic, and reliably-sourced content.

Furthermore, Wikimedia projects do not have share buttons or any other method to virally share content, as social media websites typically do. Nor do they have “autoplay” or similar features that would immediately expose users to content that may be harmful. Nevertheless, our understanding of the arguments presented is that any website host that accepts user-generated content, large or small, could be negatively impacted by this ruling.

Beyond Wikipedia, the case may also affect any consumer-facing website that accepts comments or reviews on products and services, news reporting, and any website that hosts user-discussions on a topic. Unfortunately, an unintended and harmful effect is that while the primary case focuses on social media platforms, websites that only accept limited user submissions—such as for building an encyclopedia—may be the most strongly impacted.

Under the theory that Petitioners advance, the choice to accept some content and not others (for example, notable versus non-notable topics on Wikipedia) could be the sort of “recommendation” that would create liability for both users and website hosts. This makes it difficult to host a website dedicated to a specific project—like Wikipedia.

Is judicial interpretation the best way to go about resolving these questions, or would you prefer congressional intervention, however unlikely? What would that congressional intervention look like, and how might it differently tailor the immunities currently provided by 230?

Both judicial interpretation and congressional intervention can be effective remedies for addressing harmful content online. Although we are worried that this case may lead to a precedent that causes significant unintended harm, we do think that both the courts and Congress can help address these issues.

Section 230 has been effectively interpreted in many other court cases over the past two decades, and congressional reform to address various types of harmful content could be effective. We recommend that any future reform look towards a variety of effective processes and website models.

For example, rather than specify that the website host must itself perform content takedowns or become liable for content it hosts, a better reform would obligate the host to have a process for addressing different types of harmful content that considers multiple effective content moderation models. This would create space for models like the community-governed moderation on Wikipedia, which works effectively to address most issues. Legislators should consider legal language that sets timelines and review processes flexibly for these different models of website governance, such as ensuring that takedowns do not have to be performed so quickly that they prevent discussion among Wikipedia volunteer editors.

Finally, the most effective way to address the worst effects of algorithmic amplification and recommender systems would be through privacy protections that limit platform hosts’ ability to track people across the internet and collect vast amounts of data about users. That data is used to target people with content based on their presumed preferences, and leads to the concerns expressed by the Petitioners that people will receive inappropriately targeted content. Congressional intervention is needed to protect people’s privacy and safety online.

Where do you think we will be on these issues ten years from now?

We are hopeful that a combination of improving technology and a broader social understanding of the multiple issues that can occur online will help to creatively address harmful content over the next decade. In many ways, we have already seen significant improvement between 2015—i.e., when the events in this case occurred—and 2023.

The internet industry has engaged in significant shared efforts to better address terrorist content in particular. Many website hosts, including the Wikimedia Foundation, are in the process of improving policies to address the new types of requests and content issues that have emerged since 2015. We hope to continue to do so in close collaboration with our user communities, rather than through the processes of many social media platforms, whereby only the company manages content moderation.

Further, other jurisdictions are pioneering new laws that leave room for community-led content moderation. We are optimistic that US lawmakers will consider the implications of their work on public interest projects such as Wikipedia, which functions precisely because of its alternative content moderation model.



EPIC

Answers provided by Grant Fergusson, Equal Justice Works Fellow, Megan Iorio, Senior Counsel & Amicus Director, and Thomas McBrien, Law Fellow. You can read EPIC’s full amicus brief here.

What is the ideal outcome from Gonzalez?

Obviously, one ideal outcome is that the Court adopts EPIC’s proposed rule for interpreting Section 230—a rule that neither party adopts for themselves. We believe one simple way to apply Section 230 to the modern internet is to ask whether a legal claim could be brought against the individual user who posted the content. If a user posts defamatory content, for example, the user is liable, not the tech platform. Such a test covers most cases in which Section 230 immunity applies today.

What our rule does not immunize—and what some courts have interpreted Section 230 to immunize—are claims based on harms that cannot be reasonably and fully attributed to an individual user, even when user content is involved. The products liability claims raised in Herrick v. Grindr are a good example: there, alleged harms occurred not only because user content was hosted on a dating app but because the design of that app—and the failure of Grindr to implement user safety features—made harassment possible.

What do you think the Petitioners/Respondent get wrong, and what intervention does your brief make on this issue?

One argument we wish the parties had considered more heavily is that platforms’ design decisions can harm users; the act of recommending content is not the only issue at play in Gonzalez. What do we mean by design decisions? Well, every time a platform changes the design of its user interface, the factors its algorithms consider, or the data it collects and uses, it alters how users behave on the platform and what content users create. Platforms seek out user engagement, but so do users trying to build a following. These design decisions can cause harm where simply publishing content wouldn’t, but under existing Section 230 case law and the Respondent’s brief, these harms would be inseparable from immunized platform functions.

Our amicus brief details the types of harm that don’t neatly fall into the user-content-or-platform-speech dichotomy, and we hope the Court considers the technical and design contexts around recommendation algorithms when deciding the case.

What other fields and domains might the holdings in this case unexpectedly shape?

Several amici focus less on Google’s use of an algorithm and more on the act of recommending content, but the Court’s opinion in Gonzalez could meaningfully impact how courts consider A.I. and algorithmic systems outside of the speech context. Myriad companies—both social media platforms and companies in other industries—have adopted some forms of A.I. or automated decision-making systems. These tools are used for far more than content recommendations; they recommend criminal sentences, attempt to detect fraud, screen job applicants, and so much more. However, courts have only begun to seriously consider how these tools might impact individuals’ rights and freedoms. With Gonzalez, the Supreme Court has the opportunity to decide not only whether recommending content falls within Section 230, but also whether the use of algorithms changes whether and when a platform is liable. This interpretation may shape how courts treat algorithmic tools in a wide variety of other fields, which is why EPIC’s brief included a detailed discussion of ways that algorithms can produce harm, even beyond the confines of strict content recommendation.

Is judicial interpretation the best way to go about resolving these questions, or would you prefer congressional intervention, however unlikely? What would that congressional intervention look like, and how might it differently tailor the immunities currently provided by 230?

While judicial interpretation offers the chance to reconsider legal approaches as technologies change, we’ve seen what more than two decades of judicial interpretation can do to platform liability—big tech giants, backed by elite legal teams, have crafted an expansive interpretation of Section 230 immunity that many courts have all but adopted. EPIC’s brief argues that a more limited approach to Section 230 immunity isn’t a reinterpretation of the statute, but a return to its original meaning. Asking the Supreme Court to correct prior courts’ interpretations and return to the statute’s original meaning is a valuable endeavor, but it’s not a perfect solution.

That’s why, in addition to filing our amicus brief, EPIC has chosen to support the SAFE TECH Act, which was recently re-introduced by Senators Mark Warner (D-VA), Mazie Hirono (D-HI) and Amy Klobuchar (D-MN). In particular, EPIC supports the bill’s intent to exclude injunctive relief from Section 230 immunity, allowing victims to pursue court orders to stop harmful platform decisions.



Knight First Amendment Institute

Answers provided by Scott Wilkens, Senior Counsel. You can read the Knight First Amendment Institute’s full amicus brief here.

What is the ideal outcome from Gonzalez?

The best reading of Section 230 would immunize internet platforms for their use of recommendation algorithms except where those recommendation algorithms materially contribute to the harm being alleged in a way that goes beyond the mere amplification of speech.

To be sure, interpreting Section 230 in this way would immunize some conduct that causes real harm, because some harmful conduct is the result of mere amplification of speech. But categorically excluding recommendation algorithms from Section 230’s protection would have devastating consequences for free speech online. It would require internet platforms such as search engines and social media platforms to remove large swaths of content in order to avoid the possibility of crippling liability.

What do you think the Petitioners/Respondent get wrong, and what intervention does your brief make on this issue?

Although the petitioners’ position has changed significantly over the course of the litigation, the main error they make is to interpret Section 230 not to protect the mere amplification of speech. The petitioners try to limit “publishing” in the online context to hosting speech, as distinct from recommending speech, but that distorts the plain meaning of the term. Publishing a magazine or newspaper, for example, necessarily involves recommending the content being published, and even more so for stories placed prominently. The same is true of publishing a list of search results or publishing a social media feed.

Our amicus brief argues that YouTube is shielded by Section 230 in this case because the petitioners are seeking to hold YouTube liable for amplifying certain content, and amplification, without more, does not amount to a material contribution to the alleged illegality. The brief also argues that the use of recommendation algorithms to do more than merely amplify speech falls outside of Section 230’s immunity if it materially contributes to the alleged illegality. The circuit courts have applied the material contribution test to immunize mere recommendation or amplification of content, but to leave room for other kinds of claims against the platforms.

What might the court get wrong in this case? How would the web and social media change?

It’s conceivable that the Court could categorically exclude recommendation algorithms from Section 230’s immunity. The Court could do so by saying that when a platform recommends content to users, the platform is not protected by Section 230 because it is going beyond acting as a publisher.

If the Supreme Court does so, it will have a drastic, negative impact on free speech online. Many internet platforms, including search engines and social media platforms, provide services that are largely if not entirely dependent on recommendation algorithms. As a result, it would be impossible for these platforms to avoid massive liability by no longer using recommendation algorithms. They would have no choice but to remove large swaths of constitutionally protected speech—any speech that could potentially result in a lawsuit.

What other fields and domains might the holdings in this case unexpectedly shape?

While this case is about recommendation algorithms, it could impact any online service that is potentially eligible for Section 230 immunity, meaning any online service that disseminates third-party content. This is the first time the Supreme Court will interpret Section 230, including critically important terms like “publisher,” which the statute doesn’t define. The Court’s decision will have ramifications not only for existing online services, but also future ones. Anyone who wants to develop a new online service that disseminates third party content would probably think twice if the Court’s decision makes it doubtful that their service would be protected by Section 230. One has to wonder whether the absence of Section 230 immunity for recommending content to users would have inhibited the invention or development of search engines like Google or social media platforms like Facebook.

Is judicial interpretation the best way to go about resolving these questions, or would you prefer congressional intervention, however unlikely? What would that congressional intervention look like, and how might it differently tailor the immunities currently provided by 230?

Legislative action would be far preferable to a judicial interpretation of Section 230’s immunity that categorically excludes algorithmic recommendations. Again, the amplification of content can cause real harms–no one should pretend otherwise. But legislatures can address or mitigate the harms associated with amplification through other mechanisms, including by requiring platforms to be more transparent, establishing legal protections for journalists and researchers who study the platforms, limiting what information platforms can collect and how they can use it, and mandating interoperability and data portability.



Eric Goldman

Eric Goldman is a Professor of Law at Santa Clara University School of Law in Silicon Valley. He also co-directs the High Tech Law Institute and supervises the Privacy Law Certificate. You can read his full amicus brief here.

What is the ideal outcome from Gonzalez?

The best possible outcome is that the Supreme Court preserves the legal status quo and user-generated content survives for a little longer.

What do you think the Petitioners/Respondent get wrong, and what intervention does your brief make on this issue?

My brief focuses on the interplay between Section 230 and the First Amendment. Substantively, First Amendment doctrines may yield some (but not all) of the same outcomes as Section 230. However, compared to the First Amendment, Section 230 acts as a procedural fast lane to reach those outcomes. My brief explains how Section 230’s procedural benefits are critical to Section 230’s efficacy, and why turning more cases into Constitutional litigation will have numerous negative effects.

What might the court get wrong in this case? How would the web and social media change?

Any change to the Section 230 status quo will shrink the Internet–likely dramatically. Current publishers of user-generated content will take away the power to publish from many or all of their existing user-authors. Some of those publishers will switch to publishing professionally produced content. Collectively, those developments will reduce the opportunities for authors and creators from marginalized communities and lead to paywalls that will deepen digital divides.

What other fields and domains might the holdings in this case unexpectedly shape?

A good example is the online advertising market, which depends heavily on “targeted recommendations” that are currently protected by Section 230. If Section 230 no longer applies to that advertising, it will raise the cost of advertising, reduce the quantity of ads run, and likely substantially reduce the revenue earned by ad-supported publishers. The revenue decreases would mean less content would be freely available to consumers and less content overall would be generated.

Is judicial interpretation the best way to go about resolving these questions, or would you prefer congressional intervention, however unlikely? What would that congressional intervention look like, and how might it differently tailor the immunities currently provided by 230?

Congress is also excited to undercut the Section 230 status quo. This reinforces how unlikely it is that the Internet will survive these government attacks intact.

Where do you think we will be on these issues ten years from now?

I think Web 2.0 will be a fond but distant memory of Gen Xers and Millennials. In ten years, instead of talking to each other online, we will be paying hundreds or thousands of dollars a year to access paywalled content databases available over the Internet, which to Gen Alpha will be the only form of the Internet they have ever known.

Who do you think will win?

I’m not sure who will win the court decision, but the odds are high that most Internet users will feel like they lost.



Cato Institute

Answers provided by Will Duffield, Policy Analyst. You can read the Cato Institute’s full amicus brief, submitted with R Street Institute, here.

What is the ideal outcome from Gonzalez?

Ideally the Court will recognize the wisdom of Court of Appeals decisions about algorithmic recommendation and rule in favor of Google, reifying a textual understanding of Section 230 that has safeguarded a diverse ecosystem of platforms, apps, and websites for more than twenty years. I hope the Court appreciates that liability for recommending speech can be just as stifling as liability for hosting it, and that organizing speech is a core part of publishing.

What do you think the Petitioners/Respondent get wrong, and what intervention does your brief make on this issue?

Petitioners attempt to distinguish recommendation from organization and publishing, and argue that recommendation is unprotected because it is not a “traditional editorial function.”

The first premise is faulty because, as we write in our brief, “publishing inherently involves prioritizing some speech over others, such as placing one article under the front-page headline and another on page 35. If displaying some content more prominently than others is ‘recommending,’ then recommending is inherent to the act of publishing.”

Our brief also explains that the “traditional editorial functions” test is merely a tool within the traditional Barnes analysis, not a standalone, historically derived delineator of Section 230’s scope.

Under the three-part Barnes test, Section 230 protects (1) “interactive computer service[s]” (2) from claims that treat them as a “publisher or speaker” of (3) content provided by another “information content provider.” Thus, while determining whether or not a platform was engaged in “traditional editorial functions” might be useful in deciding if a suit treats it as a “publisher,” there is no textual basis for using it as a standalone test. And if the editorial functions test is applied in its traditional context, algorithmic recommendation looks similar to other modes of editorial organization.

What might the court get wrong in this case? How would the web and social media change?

Because the Supreme Court has not previously interpreted Section 230, regardless of who it rules for, it risks disrupting a great deal of seemingly settled lower court precedent, creating uncertainty where there were previously settled rules of the road. If the legal implications of hosting, recommending, or returning user searches for controversial speech are unclear, platforms are likely to err on the side of caution, to the detriment of speech and speakers.

What other fields and domains might the holdings in this case unexpectedly shape?

Lots of services engage in algorithmic matching but aren’t often considered in political conversations because they aren’t arenas of politics. They risk ending up as collateral damage here. No one is worried about radicalization on dating apps or in digital classified ads, but they will be affected by liability for recommendations too. What if your Tinder match is abusive or a carpenter found on Angie’s List destroys your roof? Under current interpretations of Section 230, the contents of recommended dating or tradesmen profiles are solely their authors’ speech, but Gonzalez could change that.

The court’s ruling might affect search engines too. Search requires a singular user input rather than gleaned preferences, but still relies on algorithms to organize and rank information. Many search tools rely on past user activity in addition to the searched keywords. Product searches within modern shopping, travel, and real estate sites all routinely provide results tailored to the user. A ruling that attempts to separate active search from algorithmic recommendation still risks breaking these services.

Is judicial interpretation the best way to go about resolving these questions, or would you prefer congressional intervention, however unlikely? What would that congressional intervention look like, and how might it differently tailor the immunities currently provided by 230?

Judicial interpretation seemingly resolved the question of whether Section 230 protected algorithmic recommendations in cases such as Force v. Facebook in the Second Circuit and Dyroff v. Ultimate Software in the Ninth Circuit. These rulings spurred legislative proposals to amend Section 230 to exclude algorithmic recommendations from its protections. The Protecting Americans from Dangerous Algorithms Act would have required platforms to play oracle about which connections or events might lead to harm, but at least its authors appreciated that, as written, Section 230 protected algorithmic recommendation.

A focused congressional debate about whether or not liability for algorithmic recommendations is advisable is better than muddling that debate with a conversation about what Section 230 does or does not actually protect. Here the Supreme Court risks becoming an arbiter of normative debates about the internet that recent Congresses have been too divided on to resolve.

Where do you think we will be on these issues ten years from now?

Ten years is a long time on the internet. Ten years ago social media was being celebrated for mobilizing the Arab Spring, no one knew what the NSA was, and the Obama Campaign’s Facebook friend data harvesting was deemed innovative. Times have changed.

But, ideally, a decade from now we will be more at peace with the internet. We will better appreciate that misinformation and extremist speech are mostly demand-side issues and work to solve their root causes. We will understand that propaganda can be addressed but not prevented outright. We will blame the messenger less, and trust users more.

Individuals will gain greater control over the digital tools they rely on. Current algorithmic search and sorting tools are usually bundled with particular social graphs or storefronts. This puts the platform operator in the impossible position of trying to come up with a single set of rules that satisfies everyone. Whether at the platform level, in court, or in Congress, debates about what people should see are utterly intractable because different people have different preferences. So the best way out, and really the only practical way out, is to give users more options to control their online experience. In the past this sort of control has required a lot of work or technical sophistication on the part of users, but AI seems on the cusp of changing this.



Anti-Defamation League

Answers provided by Steve Freeman, Vice President of Civil Rights and Director of Legal Affairs. You can read the Anti-Defamation League’s full amicus brief here.

What is the ideal outcome from Gonzalez?

When there is a legitimate claim that platforms played a role in enabling hate crimes, civil rights violations, or acts of terror, victims deserve their day in court. To date, the overly broad interpretation of Section 230 has barred plaintiffs from being able to seek accountability through the courts.

Social media companies like the defendants here should not be automatically immunized in a blanket manner from responsibility for targeted recommendations of terrorist content, or for allowing organizations like ISIS to use their platforms to promote terrorism and obtain recruits.

We believe that the provision of Section 230 that empowers platforms to moderate hateful and harmful online content is crucial and should not change. We believe the other provision, which provides near-blanket immunity from liability for platforms, has been interpreted too broadly by courts and needs to be updated.

What might the court get wrong in this case? How would the web and social media change?

The Supreme Court could reach results that lead to dangerous consequences, making it more difficult for platforms to moderate content in ways that fight hate.

What other fields and domains might the holdings in this case unexpectedly shape?

This case could significantly change the landscape for social media accountability for spreading terrorism, hate, and extremism that result in legally actionable harm.

Is judicial interpretation the best way to go about resolving these questions, or would you prefer congressional intervention, however unlikely? What would that congressional intervention look like, and how might it differently tailor the immunities currently provided by 230?

Section 230 was enacted before social media as we know it existed, yet it continues to be interpreted to provide technology companies with near-blanket legal immunity for not only third-party content, but even for how their own tools are exacerbating hate, harassment, and extremism.

ADL strongly believes that Section 230 has to be updated to fit the reality of today’s internet. We believe it is Congress’s responsibility to update Section 230 to clarify what is and is not covered by the law. Congress should update Section 230 to better define when platforms should have immunity from liability and what type of platform behavior should not be covered. While platforms shouldn’t necessarily be accountable for user-generated speech, they should not be granted automatic immunity for their own activity that results in legally actionable harm, whether that be through auto-generated content, designing tools that are used to discriminate, or other dangerous product features.

In addition to ensuring platforms are not immunized for bad behavior, we believe that updating Section 230 will incentivize social media platforms to more proactively address how things like recommendation engines and surveillance advertising practices are exacerbating hate and extremism, which leads to online harms and offline violence.

Where do you think we will be on these issues ten years from now?

Hopefully, meaningful action will be taken to make our internet a safer place in the next ten years. However, Section 230 reform is just one piece of the puzzle. Ultimately, online hate will not be fixed by a single, one-size-fits-all solution—including any single piece of legislation. This is because hate manifests everywhere from the darkest corners of the internet to platforms with hundreds of millions of monthly active users. In order to truly create a safer internet where hate and extremism are not free to spread unchecked, we will continue to advocate for a comprehensive approach that includes transparency legislation, ending the surveillance advertising business model, better content moderation practices, and other reforms that shape the broader ecosystem.



Cathy Gellis

Cathy Gellis is a former Internet professional and webmaster now assisting clients with legal issues related to the digital age. She is the counsel of record for the amicus brief submitted by Chris Riley, the Copia Institute, and Engine Advocacy. You can read the full amicus brief here.

What is the ideal outcome from Gonzalez?

The ideal outcome is one that reaffirms Section 230 as an intentionally broad statute that is widely applicable, with unequivocal language in the decision that leaves no doubt as to its expansive reach in order to deter further attacks on it and to help guide later courts so that they can resist being persuaded otherwise.

The only non-disastrous outcome, however, is one that at least doesn’t leave Section 230 further undermined than it already is thanks to an influx of tepid rulings that keep trying to rewrite it as a narrow statute only sometimes applicable. Avoiding this outcome requires more than just finding for Google in this particular case and upholding the Ninth Circuit’s earlier decision; the Court really needs to bite its tongue and not hypothesize in any further dicta about potential limits to the statute. The analytical dalliances it has already engaged in only fuel other courts’ bad decisions by eroding judicial understanding of how the law works and why, which in turn undermines the statute’s critical protection. Thus the ideal outcome is for the Court, and all its justices, to resist the temptation to pursue such uninformed speculation here, even if it comes in the form of technically handing Google a win.

What do you think the Petitioners/Respondent get wrong, and what intervention does your brief make on this issue?

The Petitioners understand that Section 230 acts to protect an Interactive Computer Service (ICS) that helps facilitate others’ content, but not the third-party Information Content Provider who provides that content. To avoid that bar to their claims against Google, they try to recast the content provided by third parties as content belonging to the ICS, as if choosing how to facilitate someone else’s content makes that content the ICS’s own. But if that were the way the statute worked, it would eviscerate why we have it in the first place: we have it so that there can be ICSes available to facilitate others’ content. If the act of facilitating content could waive the protection services need in order to facilitate it, then they would no longer be able to do that facilitation at all. Nor would it be logical for the act of deciding how to display content someone else created to somehow amount to creating that content; the content had obviously already been created by someone else.

But while this particular read of the statute is absurd, it isn’t the first time a plaintiff has tried to argue that an ICS created the offending content, because that question of who created the content is what Section 230 hinges on. And in other cases the answer may be a much closer call than here. So we offered the Court a new test to use: who created (or “imbued”) the allegedly wrongful quality in the content at issue? It won’t be the ICS if all it did was enable the content’s display, no matter how prominently. As the Ninth Circuit in the Roommates case long ago suggested, it would take something considerably more.

What might the court get wrong in this case? How would the web and social media change?

There are a lot of popular myths about Section 230, including the belief that it is a special privilege that somehow gives undue favor to certain companies and disincentivizes their better behavior. None of these misapprehensions are correct, which is why even slightly affecting Section 230’s intentionally broad application will be so devastating to the Internet at large.

In particular, Section 230 is not about providing some sort of subsidy to big companies. In fact, at the time Section 230 was passed, these big companies didn’t exist. What Section 230 did was create an ecosystem where new online services could develop. And the more displeased we may now be with how those services developed, the more important it is that we preserve a regulatory environment where new alternatives can exist.

Section 230 is critical to preserving that possibility because it makes it possible for new services to come along without having to worry about being obliterated by litigation costs. The issue isn’t even one of ultimate liability; the problem is that having to defend against any lawsuit, even an unmeritorious one, is so draining. If online services had to fear being sued for how any of their users used their sites, let alone how potentially all of their many users did, it would make providing those services functionally impossible. Section 230 exists to make sure those service providers don’t have to worry: they can be available to help users, and also help moderate their services so that they remain useful, without getting obliterated by litigation challenging how they do either. If Section 230 incentivizes anything, it is those behaviors, facilitation and moderation, to the best degree the services can manage, by making it safe for them to make their best efforts to try. Curtailing the statutory protection puts their ability to do these things in doubt, opening the door to potentially unlimited litigation and making it impossible for services to do either. Users will lose out on alternatives to the current incumbent services, and even incumbents may have to start turning users away too.

What other fields and domains might the holdings in this case unexpectedly shape?

The consequences of this case stand to reach far beyond Section 230 itself. The editorial activity that the First Amendment protects is directly in the line of fire, and if the Court doesn’t see what is at stake it threatens to do real harm to the First Amendment overall.

What one chooses to recommend is an expressive decision that the First Amendment most decidedly protects and must protect. We can easily understand this freedom in an offline context, because we can feel the freedom we have to recommend something to someone else, or to disparage something we don’t like and don’t want to be associated with. We can even see how that dynamic works with expression, because we can easily understand how a bookstore owner shouldn’t have to carry a book it doesn’t like, while at the same time the same store owner should be at liberty to promote in its store window books that for whatever reason it wants to promote. It should also be free to arrange any of the expression created by others that it offers any way it likes, sorting them by title, author, subject, category, size, color, or any other feature it chooses. (The Supreme Court has also recognized this freedom in the context of newspapers, finding that the First Amendment prohibits any newspaper publisher from having to run op-eds that it does not wish to be associated with.)

Content moderation is just making the same sorts of editorial decisions that are made in the offline context. And algorithmic tools simply help them be made in volume, because Internet platforms often find themselves handling far more items than their offline cousins do. In the face of a potentially infinite firehose of third-party content, they may need algorithmic tools to apply their editorial decisions at scale. Those decisions themselves may turn out to be good, bad, or otherwise, but algorithmically implemented or not, they are still the sorts of decisions platforms are Constitutionally entitled to make, online as well as off. But if the Court treats algorithms as some sort of special magic outside the reach of the First Amendment, then all of the protection the First Amendment affords for any of this editorial discretion, online or off, is at risk.

Is judicial interpretation the best way to go about resolving these questions, or would you prefer congressional intervention, however unlikely? What would that congressional intervention look like, and how might it differently tailor the immunities currently provided by 230?

There is an opportunity for the Court to do some good here, reaffirming the broad reach of the First Amendment in the online realm, but it comes at great risk. While it could potentially be helpful for the courts to defend the scope of Constitutional protections, such as those afforded by the First Amendment, when they speak in a way that ends up limiting those rights, the effects can be catastrophic.

The risk is particularly high here because the specific case in question is not about the Constitutionality of any particular legislative action but simply the interpretation of a perfectly Constitutional statute. The Petitioners are asking for an interpretation that effectively changes the statute because they don’t like its policy effects. But such policymaking is dangerous for courts to do, because if they effectuate a policy that is harmful, then there is no obvious recourse. At least if legislatures get policy wrong the public can vote them out, but there is no such direct accountability to the public if the courts get a policy wrong. Furthermore, it would tie the hands of legislators if they could not write the statute they meant to write because the Supreme Court didn’t like the policy they wrote it to vindicate. The public needs to be able to instruct their representatives to write the laws that match their policy desires, but they can’t do so when the otherwise Constitutional policies the legislature tries to implement can be subject to veto by a branch of government they have no political power over.

At the same time, while it should be exclusively within Congress’s purview to change Section 230, it would still be ill-advised for it to do so. The policy it resolved upon 25 years ago has been a good one that has indeed stood the test of time. That is not to say that everything is perfect with the Internet, but without the critical statutory protection Section 230 provides, everything would be much worse, with the Internet also affording far fewer benefits than it currently can and does. Such an outcome is threatened if Congress were to try to snip away at the statute and make it less clearly applicable than it was originally drafted to be. Such legislative efforts should therefore not be attempted—but at least if it were Congress that changed the statute, and in doing so wrecked the Internet, the legislators who had engaged in such foolishness could at least in theory pay the price politically and be voted out.

Where do you think we will be on these issues ten years from now?

The consternation surrounding Section 230 seems to be a byproduct of some deep cultural misunderstanding and cynicism about the First Amendment generally and how it operates online specifically. Where we will be in ten years depends on whether the pendulum can swing back to a place where people can understand how Section 230 vindicates the First Amendment and appreciate the critical job it does, and also better appreciate the important work the First Amendment itself does in protecting free expression. Sadly, it might take losing the protection of the statute (or the First Amendment) to see what the cost would be of losing it. It is a cost that may be so high that it could be hard to recover from, so hopefully we can avoid such a legal fate before it’s too late.

Who do you think will win?

There is a good chance Google will win the litigation battle here; the question however is whether it and the Internet may lose the greater war in the process.

Google will win because the petitioners’ Section 230 argument would effectively eviscerate the whole point of Section 230, and the Court may see how statutorily impossible such an outcome would be. The Petitioners also make a weak causal argument for the ICS being at the root of their injury, and the Court may see that connection as too tenuous to reward with a remedy here. Furthermore, as a technical matter, the Petitioners’ brief strayed from the question the Court granted its review to examine, and the Court may notice that procedural foul and conceivably dispense with the case by declaring its review “improvidently granted,” thus never reaching a decision on this particular case at all.

The issue however is that it’s not enough for Google to win. The language the Court uses to resolve the case will shape how Section 230 is interpreted in the future, and it would not take much for the Court to undermine its utility if it misunderstands the statute or engenders doubt about its expansive function with any of the language it uses to render its decision. There is a lot for the Court to get wrong here even in pursuit of a result that would be right, and for the case to truly be a win for Google, any other platforms, and the Internet at large, it needs to avoid giving Google a win in any way other than unequivocally.



Integrity Institute

Answers provided by Sean Wang, Partnerships & Collaborations Manager. You can read the Integrity Institute’s full amicus brief, written with Algotransparency, here.

What do you think the Petitioners/Respondent get wrong, and what intervention does your brief make on this issue?

In a case that could have drastic impacts on how social media and other tech platforms design their algorithm-based recommenders in view of liability, it is of the utmost importance for the Court to carefully distinguish algorithms based upon the nature of the recommendations they make. Our amicus brief, in support of neither party, offers an independent explanation of the technology to ensure that the Court has an accurate understanding of how recommender systems (colloquially, “algorithms”) operate.

We further explain the three common types of “algorithms” used in large tech platforms in a summary sheet, and our brief urges the Court to decide narrowly based on technical specificity and nuance.

Is judicial interpretation the best way to go about resolving these questions, or would you prefer congressional intervention, however unlikely? What would that congressional intervention look like, and how might it differently tailor the immunities currently provided by 230?

Discussions about “algorithms” in Gonzalez v. Google often make them out to be black boxes. While the technical ways companies optimize their ranking systems are indeed complex, what they are optimized for is not. In our brief and on our website, the Integrity Institute has laid out how to understand – and inspect – ranking systems. If we had mechanisms to ensure that companies like Google provide well-thought-out transparency about their ranking systems, we could easily have understood whether Gonzalez v. Google was a one-off tragedy or part of a pattern of behavior. Transparency would also resolve many similar disputes if it is focused on a framework of system design rather than on moderating specific content, optimizing for engagement and growth, or even avoiding specific harms. Lacking that, society is instead going through an expensive court case that cannot give a definitive answer.

The Integrity Institute works best as an honest broker on technical solutions in platform regulation and governance; laying out the judicial or legislative tactics that would get us to those regulatory mechanisms is not our expertise. We can be an important part of the solution, however. Besides making integrity workers’ knowledge available to the public, as we did in our brief, we also demand that companies do two things. First, make transparent the systems, optimizations, and design choices we see affecting the products we are trying to fix. Second, place integrity work front and center, so that integrity professionals are empowered to do their jobs shifting the optimization of products away from short-term company goals and towards helping individuals, societies, and democracies thrive.



Kent School District

Answer provided by Paul Brachvogel, General Counsel. You can read the Kent School District’s full amicus brief, written with Seattle School District, here.

We hope the Supreme Court takes the opportunity to make clear that social media companies are responsible for harm they cause. When a social media company knows illegal content is on its platform and does not remove it—or worse, actively promotes or recommends it—nothing in Section 230 shields the company from liability. The actions of these companies have played a major role in causing a youth mental health crisis, and Kent School District, like school districts across the country, is one of the main providers of mental health services for school-aged children in its community. That is why Kent School District has filed a lawsuit to protect its students and secure the resources it needs to address this crisis.
