In case you missed it, the tech law community had its equivalent of the Super Bowl this past month. The US Supreme Court heard long-awaited oral arguments in the Section 230 cases, Gonzalez v. Google and Twitter v. Taamneh. Here are a few takeaways:
Section 230 Might Be Safe for Now
If one thing was clear from the oral arguments, it’s that the Court does not seem as eager to rework Section 230 as many might have thought.
First, the Justices struggled to see how recommending third-party content is different from publishing it – a function squarely protected by 230. The Gonzalez plaintiffs argued (unconvincingly) that once a platform provides third-party content a user did not ask for, it magically transforms that content into the platform’s own speech, extinguishing 230’s protections.
That would be a curious understanding of 230, especially given the way the law has been interpreted for the last quarter-century. Results from search engines, content surfaced by job boards – all these forms of algorithmic curation would be subject to liability if they promoted defamatory or otherwise unlawful information.
Google’s lawyer took a different route. Ms. Blatt argued that recommending third-party content is closer to the traditional editorial functions that newspapers or magazines use to draw a reader’s attention to a story, rather than an endorsement of the content itself. Under her theory, the difference is between, say, the New York Times promoting the comments of some of its readers and the Times publishing its own op-ed. The former is third-party content, the latter clearly is not.
While the Court did not wholly embrace Google’s view, the Justices’ questions suggested that granting the Gonzalez plaintiffs’ argument would render Section 230 meaningless. 230, of course, was meant to shield platforms from liability created by third-party content. Allowing plaintiffs to sue platforms for recommending that content would be an end run around the law.
Second, and more importantly, the Justices recognized the implications of removing 230’s liability shield. Justice Kagan expressed skepticism over the Court’s role in determining when algorithmic curation becomes more than just a recommendation, given the technical expertise required. Justice Kavanaugh was one of the first to note that ruling against the platforms could affect industries well beyond social media. Even Justice Barrett openly wondered whether the Court could decide Taamneh and leave 230 as is.
The Court’s inability to draw clear lines for 230 liability, together with its recognition of the ripple effects a ruling might have on other industries, is a good indicator of a favorable ruling for Google and its amici.
Twitter Probably Isn’t Aiding & Abetting ISIS
The Taamneh case dealt more with the heart of the plaintiffs’ claims – can platforms be liable when terrorists misuse their services? Key to the Court’s analysis was what the platform knew about terrorists’ use of its services, and whether those services substantially aided a terrorist attack.
The Justices went back and forth with Twitter and the Government on how much a platform needed to know to aid and abet terrorist activity. Did the platform need only a general awareness that ISIS was using its services to promote and coordinate violence? Or did it need to know of the specific accounts coordinating specific terrorist acts? And how much aid is sufficient to count as substantial assistance?
However, the plaintiffs barely allege that Twitter was directly involved in helping ISIS carry out terrorist attacks. Such an allegation wouldn’t make sense; supporting terrorist activities is antithetical to the company’s business model. So instead they argue that the platforms are liable because they are generally aware of terrorists using their services.
Justice Alito seemed unconvinced by that argument. He took it to its logical end point – would gas stations, telephone companies, utilities companies be liable for supporting terrorists even without knowing the specific person, the nature of the support, and how much that support mattered?
Other Justices shared this skepticism. Justice Thomas questioned assigning liability to a company that provides legitimate, generally available services simply because a terrorist uses them. Justice Kavanaugh asked whether CNN could be held liable for aiding and abetting Osama Bin Laden’s terrorist attacks, given that Bin Laden waged war on America using its news channel.
These questions suggest that even the more conservative members of the Court are unable to draw a connection between the plaintiffs’ claims and what the law can provide them – a fact that does not bode well for their case.
While many spectators agree that the winds appear to favor the platforms, it’s unclear what the Court intends to do here. Some of the Justices articulated potential rulings around the amount of knowledge a platform would need to meet the bar for aiding and abetting. Others suggested that if content were arranged in an unlawful manner (e.g., fraudulently), Section 230’s protections might not apply. But any remedy not tied to the realities of the internet will have exactly the consequences many of the Justices fear. The Court should show restraint and encourage Congress to do its work.
Dylan is a law student at Harvard University where he is a student member of the Federal Communications Bar Association and focuses on the intersection of law, public policy, and internet-powered technologies.
Prior to his graduate studies, Dylan spent time working on the social and technical sides of the Internet. He was a fellow at the Berkman Klein Center for Internet & Society and held several roles in content policy and operations at Facebook and YouTube focused on mitigating the risks of online hate speech, terrorism, and misinformation.