Threads of Wisdom: Experts React to Meta’s Policy Changes

January 17, 2025

The Institute for Rebooting Social Media gathered insights from leading scholars, tech policy experts, and industry professionals on the immediate and long-term consequences of Meta's recent policy changes.

Meta’s recent overhaul of its content moderation approach marks a significant shift in platform governance. The changes include ending the third-party fact-checking program in favor of a Community Notes model (popularized on X), reducing content restrictions on political topics such as immigration and gender identity, and announcing that content moderation teams will move to Texas.

The changes are controversial and have raised concerns that hate speech and other problematic content will spread more easily on Meta’s platforms. However, Meta’s Chief Global Affairs Officer Joel Kaplan has justified the changes by stating, “Too much harmless content gets censored, too many people find themselves wrongly locked up in ‘Facebook jail,’ and we are often too slow to respond when they do.”

These changes raise important questions about the future of online discourse, platform governance, and the role of social media in shaping public conversation. To explore these implications, the Berkman Klein Center’s Institute for Rebooting Social Media gathered insights from leading scholars, tech policy experts, and industry professionals who study digital platforms and online communities. Their analyses examined both immediate impacts and long-term consequences for users and the broader health of public discourse.

Respondents:

Katie Harbath, Founder & CEO, Anchor Change (former Director of Public Policy, Facebook)

Kate Klonick, BKC Affiliate; 2022-23 RSM Visiting Scholar; Associate Professor at St. John’s University Law School

Rob Leathern, BKC Affiliate; Founder, Trust2 Consulting (former Senior Director of Product Management, Facebook)

Sam Lessin, General Partner, Slow Ventures (former VP of Product, Facebook)

Paul Resnick, RSM Visiting Scholar; Michael D. Cohen Collegiate Professor at the University of Michigan School of Information and Director of the Center for Social Media Responsibility

Dave Willner, BKC Affiliate; Fellow, Stanford Cyber Policy Center (former Head of Content Policy, Meta)

The responses below have been edited for length and clarity.


What are the anticipated short- and/or long-term impacts of Meta’s recently announced policy changes related to fact-checking and content moderation?

Katie Harbath (excerpted with permission from her recent Substack about the policy changes, “Overlords of Overwhelm”): These announcements reflect a pattern: announce sweeping changes, figure out the details later, and rely on the media’s and others’ fractured attention spans to avoid scrutiny. According to the New York Times, Zuckerberg kept the circle of decision-makers small to prevent leaks, which allows them to move fast but leaves little room for debate, for alternative solutions to be presented, or for the details of how any of this will work in practice to be worked out.

Another component of these tactics is to focus people on one particular issue that they can understand, while more meaningful changes that are harder to understand receive less scrutiny. In Meta’s case, much of the focus is on the elimination of the fact-checking program in the United States rather than on the changes to the policies themselves and to the proactive detection of potentially violating content.

Kate Klonick: I’m hearing from people on [Meta’s] governance and trust and safety teams that they were caught off guard by the news that Zuckerberg announced last week. Fact-checking is not a huge part of Facebook’s content moderation process to begin with, so this is not necessarily the huge sea change that Zuckerberg tried to sell it as. And neither is the “move” of content moderation to Texas from California. Something like 80% of Meta’s content moderation and trust and safety work was already being done in Texas (mostly Austin), so again, this change is perhaps overstated. Ultimately, I believe the effects on the platforms themselves will be small; the greater effects will be performative, shaping politics and how we all engage in political speech generally, not just online.

Rob Leathern (excerpted with permission from his recent Substack about the policy changes, “A Custom Filtered Future”): In the near future, we’ll likely see a baseline set of rules put in place to address mostly illegal content and content that pushes people off of platforms and reduces engagement. These might be industry-wide requirements or platform-specific policies that handle the worst of the worst (child exploitation, explicit illegal content, etc.). Once that baseline is enforced, though, there’s room for individuals to opt into more specialized filtering solutions offered by platforms directly but also by third parties. Because LLMs can quickly interpret text, images, and video, the cost of doing this has recently fallen by several orders of magnitude and is still falling, meaning almost anyone could publish their own filter, model, or algorithm.

This will have obvious downsides. It’s already hard to know what’s happening inside anyone’s feed; in a future of personal curation, not even the platforms themselves might know or understand what a given user is seeing. It also has the same problems with fraud, abuse, and variable security and privacy protections that today’s app stores have.
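For readers who want a concrete picture of the kind of user-chosen, client-side filter Leathern describes, here is a minimal Python sketch. The FilterRule structure, the llm_label helper, and the plain-language rules are hypothetical stand-ins rather than any platform’s actual interface; a real filter would also need to handle images and video, rate limits, and adversarial content.

```python
# Sketch of a user-chosen, client-side content filter layered on top of a
# platform's baseline rules. The LLM call is stubbed out: llm_label stands
# in for whatever hosted or local model a filter author might plug in.

from dataclasses import dataclass

@dataclass
class FilterRule:
    name: str          # e.g. "unsourced health claims"
    instruction: str   # natural-language question posed to the model
    action: str        # "hide", "label", or "show"

def llm_label(post_text: str, instruction: str) -> bool:
    """Hypothetical helper: ask a language model whether the post matches
    the rule's instruction. Returns True if the model says it matches."""
    raise NotImplementedError("plug in a model of your choice here")

def apply_filters(post_text: str, rules: list[FilterRule]) -> str:
    """Return the action for the first matching rule, or 'show' by default.
    Baseline enforcement against illegal content is assumed to happen
    upstream, on the platform side."""
    for rule in rules:
        if llm_label(post_text, rule.instruction):
            return rule.action
    return "show"

# A user (or a third-party filter publisher) expresses preferences in plain language:
my_rules = [
    FilterRule("graphic violence",
               "Does this post contain graphic violence?", "hide"),
    FilterRule("unsourced health claims",
               "Does this post make a medical claim without citing a source?", "label"),
]
```

Because the rules are just text, they could in principle be published, shared, and swapped like any other artifact, which is what would make a “filter market” possible.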

Paul Resnick: This is not the end of content moderation, even for misinformation. Platform-driven promotion and demotion of content (sometimes called “algorithmic curation”) is here to stay, because it produces content streams that people like a lot better than unfiltered content streams. But it’s the end of an approach to misinformation that pretended that all judgments were objective. Instead, new approaches will acknowledge and even embrace subjectivity and disagreement. There’s a lot that Meta can learn from X’s Community Notes in this regard. It employs a clever “bridging” vote-counting algorithm that approves proposed notes when they are upvoted by people who don’t usually agree with each other. 

But Meta needs to be careful. There are many risks of manipulation and over-enforcement in a system like Community Notes. X had the benefit of developing its Community Notes system slowly and making small adjustments, building the community’s confidence in it over time. It will be interesting to see whether Meta can avoid the many pitfalls that could arise during implementation.

(You can get a feel for the kind of notes that the X community approves, and those they don’t, by browsing through examples at the Community Notes Monitor.)
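For a concrete sense of the bridging idea Resnick describes, here is a toy Python sketch. The rater viewpoint scores, the threshold, and the approval rule are illustrative only; X’s published Community Notes algorithm estimates rater and note parameters with matrix factorization rather than using fixed group labels.

```python
# Toy illustration of "bridging" note approval: a note needs support from
# raters who usually disagree with one another. The viewpoint scores and
# threshold below are invented for the example.

def bridging_approves(ratings: dict[str, bool], viewpoint: dict[str, float],
                      min_side_support: float = 0.5) -> bool:
    """ratings: rater_id -> True if that rater marked the note helpful.
    viewpoint: rater_id -> score in [-1, 1] locating the rater on a
    disagreement axis (estimated from past rating behavior).
    Approve only if at least min_side_support of raters on *each* side
    found the note helpful."""
    left = [helpful for r, helpful in ratings.items() if viewpoint[r] < 0]
    right = [helpful for r, helpful in ratings.items() if viewpoint[r] >= 0]
    if not left or not right:
        return False  # no cross-group signal yet
    left_support = sum(left) / len(left)
    right_support = sum(right) / len(right)
    return min(left_support, right_support) >= min_side_support
```

The key property is the min(): a note that delights one side while the other side rejects it never clears the bar, which is what makes the approvals feel less like one camp’s fact-checking.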

Dave Willner: [Ending fact-checking] was the headline-grabbing change that everybody focused on, but to me it is the least important thing they announced. Far more important news has come out since, along with some of the things they said at the time.

For one, Meta is turning off all their attempts at misinformation classification, even in feed ranking. So it’s not just that they’re not flagging content; they’re also not downranking it anymore. That seems like a way bigger deal to me than the fact-checking as such, in terms of the amount and spread and exposure of this kind of content.

Two, the changes they made around hate speech are way more worrying. The move from proactive flagging and removal to reactive flagging and removal is going to mean they miss a lot, especially inside communities that are aligned around particularly dangerous content. They’re throwing the baby out with the bathwater in a way that can lead to some toxic, isolated small-group conversations.

How might these policy changes impact the current social media landscape? What might we expect to see from other platforms?

Rob Leathern (excerpted with permission from his Substack, “A Custom Filtered Future”): One of the most interesting possibilities here is the emergence of a market for filters, where individuals and organizations can produce custom sets of rules and algorithms. These “client-side” solutions would reframe trust and safety as a shared responsibility between platforms, which keep out the truly unlawful or harmful, and users, who can choose which additional filters align with their personal values and preferences. There are some places where this becomes trickier, like ads, because of the monetary incentives involved, but I could also see hybrid subscription/ad models emerging that give users a greater degree of control over the type and frequency of ads as part of a future “filter market”.

Sam Lessin: The internet is a very large and diverse place. People have conversations everywhere. There will continue to be churn, and people will continue to find new places to have conversations, in various ways, shapes, and forms. But I don’t think there’s going to be anything seismic. These things will keep evolving. One thing people don’t fully grasp from a product perspective is that, for someone to leave a network and go to [a different platform], it’s an incredibly heavy lift. It’s like leaving your home.

Katie Harbath (excerpted with permission from her recent Substack about the policy changes, “Overlords of Overwhelm”): As we grapple with these changes, we must also look at them through the lens of artificial intelligence, which will play an increasingly critical role. AI tools are already used to manage information overload, summarize content, and generate news. They are being updated and released at a dizzying pace. Our world will look dramatically different in three years than it does today. As AI continues to be implemented in more and more places, how do policy changes like these affect what AI generates for us? As we work to influence how companies train their models and what guardrails they put into place, we will need to move much faster to develop policies and to red-team (pressure test) these models on how they handle issues such as immigration and gender, among many others.

Paul Resnick: For extreme content, I think social media platforms will continue to use the same policy playbook they have honed for the past 15 years. 

But for content types like misinformation and civility, where the exact boundaries are hard to describe and there is significant disagreement about where they should lie, I think other players will follow Meta’s lead in declaring that some other approach is needed, one that will be perceived as having greater public legitimacy.

Dave Willner: I think it may also fuel further moves towards decentralization. I’ve historically been pretty skeptical of the Fediverse because I did not think that small proprietors were going to prove to be capable of doing the level of moderation you have to do, even in a federated system. I’m more optimistic now. 

What might these changes mean for the future of civil discourse, both online and off?

Rob Leathern (excerpted with permission from his Substack, “A Custom Filtered Future”): Platforms will likely still bear the responsibility of enforcing core rules against illegal content, but real innovation may be possible in user-driven, client-side filters that give each of us the power to choose how we want to engage with the internet. (…) If users apply drastically different filters, do we lose a sense of shared reality? Will each of us be sealed into our own bubble?

Sam Lessin: The problem is that civil discourse in the physical world and the digital world is dramatically different. In the physical world, you speak to the people around you. Those people that are physically near you, almost by definition, have a spectrum of beliefs, a spectrum of positions, it’s not all the same people, and that forces you to have a fairly healthy and moderate dialogue. 

The physics of the digital world are quite different: it is infinite. The way to build community is more to say or believe whatever you’re going to believe and route to the people who agree with you, rather than engage with the people physically proximate to you.

So the physics changes into this polarity thing. No one will agree with everything, but now at least you will see it [content] and can engage with it to some degree.

Paul Resnick: Elevating content that appeals to people who haven’t agreed in the past may be the key to promoting civil discourse. The success of the vote-counting algorithm in Community Notes may inspire analogous uses outside of misinformation moderation.

Imagine a world where getting some grudging upvotes from across the aisle helps make your content go viral. It would upend the current incentive structure that favors polarizing content.
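As a back-of-the-envelope illustration of the incentive shift Resnick imagines, the sketch below adds a bonus for upvotes that come from across a divide. The weighting is invented for the example and is not any platform’s actual ranking formula.

```python
# Illustrative ranking tweak: reward engagement from across a divide more
# than engagement from one's own side. The cross_weight value is made up
# for the example.

def ranking_score(same_side_upvotes: int, cross_side_upvotes: int,
                  cross_weight: float = 3.0) -> float:
    """Cross-side upvotes count several times more than same-side ones,
    so content that earns 'grudging upvotes from across the aisle'
    outranks content that only rallies its own audience."""
    return same_side_upvotes + cross_weight * cross_side_upvotes

# 100 same-side upvotes alone: score 100.
# 40 same-side + 30 cross-side upvotes: score 40 + 90 = 130.
```

Under a rule like this, a post that only rallies its own audience loses to one that also wins over some skeptics, which is the inversion of today’s engagement incentives that Resnick describes.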