Engineering and Policy Development

Platform Accountability: Developing Systems to More Meaningfully Assess and Mitigate Platform Harms

August 2, 2023

Assembly Fellow Nate Lubin recaps his fellowship project: a white paper that argues for a series of platform accountability mechanisms informed by the field of public health.

In 2020 and 2021, I launched several non-profit initiatives (principally through the Fellow Americans Education Fund and the Better Internet Initiative) that created digital content on public interest topics, including COVID-19 education during the pandemic, and measured its effects. Much of that work, the evidence showed, was effective: certain messages, as judged by randomized controlled trials (RCTs), shifted views at the margins toward understanding the stakes of key issues.

At the same time, though, we saw two trends that were notable, if consistent with other experiments. First, the content that drove the most engagement – likes, or time spent on a platform like YouTube – tended not to be more persuasive or more effective at conveying the message. Second, even content that effectively addressed important goals, like providing accurate information about the pandemic, could produce unintended negative effects, like reducing interpersonal trust. For example, some of the videos most effective at convincing people to take the pandemic seriously were first-person accounts by medical professionals – especially nurses – describing the lived experience of being in a hospital during spikes in cases. Yet those same videos, because they were essentially accounts of civic peers failing to protect their communities, reduced measures of social trust.
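To make that kind of measurement concrete, here is a minimal sketch of how an RCT of this sort estimates effects on a primary outcome and a secondary outcome at once. It is illustrative only: the data are synthetic, and the outcome scales and effect sizes are assumptions for the example, not figures from the studies described above.

```python
# Minimal sketch: estimating effects of content exposure in a two-arm RCT.
# All data here are synthetic; the effect sizes (+0.10 on concern, -0.05 on
# trust, on 1-7 survey scales) are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 5_000  # respondents per arm

# Simulated survey outcomes for control vs. treated (saw the video) groups.
control_concern = rng.normal(4.00, 1.5, n)
treated_concern = rng.normal(4.10, 1.5, n)
control_trust = rng.normal(4.50, 1.5, n)
treated_trust = rng.normal(4.45, 1.5, n)

def ate(treated, control):
    """Difference-in-means estimate with a 95% CI and Welch t-test p-value."""
    diff = treated.mean() - control.mean()
    se = np.sqrt(treated.var(ddof=1) / len(treated)
                 + control.var(ddof=1) / len(control))
    _, p = stats.ttest_ind(treated, control, equal_var=False)
    return diff, (diff - 1.96 * se, diff + 1.96 * se), p

for name, arms in [("concern (primary)", (treated_concern, control_concern)),
                   ("trust (secondary)", (treated_trust, control_trust))]:
    diff, ci, p = ate(*arms)
    print(f"{name}: ATE={diff:+.3f}, "
          f"95% CI=({ci[0]:+.3f}, {ci[1]:+.3f}), p={p:.4f}")
```

The point of the sketch is that a single experiment can move two outcomes in opposite directions; an evaluation that tracked only the primary persuasion measure would miss the trust cost entirely.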

Seeing this data, I was convinced of a few things:

  1. Exposure to content can have measurable, if small, effects on defined populations.
  2. If randomized controlled trials run in laboratory settings with limited resources can assess socially important measures, the same methods might be implemented using the native RCTs (A/B testing systems) of large platforms like Facebook, TikTok, and YouTube (see the power-calculation sketch after this list).
  3. There is an opportunity to leverage these methods to assess the effects of platform architecture, not just specific pieces of content. In particular, this approach could draw motivation from earlier advocates like Rachel Carson, the author of Silent Spring, who is credited as one of the originators of the environmental movement.
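A back-of-the-envelope power calculation helps motivate the second point: effects too small to detect at lab scale become measurable at platform scale. Here is a minimal sketch, assuming a standard two-arm design; the effect sizes are illustrative assumptions, not estimates from any platform.

```python
# Minimal sketch: users needed per arm to detect small standardized effects
# in a two-arm experiment at alpha=0.05 and 80% power. The effect sizes
# below are illustrative assumptions, not estimates from any platform.
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()
for effect_size in (0.20, 0.05, 0.02):  # in standard-deviation units
    n_per_arm = power_analysis.solve_power(
        effect_size=effect_size, alpha=0.05, power=0.8
    )
    print(f"effect = {effect_size:.2f} SD -> ~{n_per_arm:,.0f} users per arm")
```

An effect of 0.02 standard deviations, small in the sense of the first point, requires roughly 40,000 users per arm: out of reach for most lab studies, but routine for the native experimentation systems of a large platform.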

With the support of Berkman Klein’s Institute for Rebooting Social Media (RSM) and Cornell Tech, I’ve spent a good bit of the last eighteen months expanding on these ideas: talking with experts from industry (including a number of former employees of major platforms who were responsible for implementing and assessing product experiments), public health, and law and regulation, and collaborating with my colleague Tom Gilbert. And last month I was excited to launch https://www.platformaccountability.com, alongside articles in The Atlantic, Tech Policy Press, and Lawfare. Most substantively, we also posted the full white paper outlining the proposal, its context, and some of the motivating history of public health, as well as more directed briefs for product leaders and for regulators.

The full abstract of the paper is below. If you’re interested in this work, particularly from a product development, public health, or regulatory perspective, please reach out regarding upcoming workstreams. We will be holding convenings in the fall to explore specific implementation ideas, especially detailed specs for metrics.

Accountability Infrastructure: How to implement limits on platform optimization to protect population health

Attention capitalism has generated design processes and product development decisions that prioritize platform growth over all other considerations. To the extent limits have been placed on these incentives, interventions have primarily taken the form of content moderation. While moderation is important for what we call “acute harms,” societal-scale harms – such as negative effects on mental health and social trust – require new forms of institutional transparency and scientific investigation, which we group under the term accountability infrastructure.

This is not a new problem. In fact, the history of public health offers many conceptual lessons and implementation approaches for accountability infrastructure. After reviewing these insights, we reinterpret the societal harms generated by technology platforms through reference to public health. To that end, we present a novel mechanism design framework and practical measurement methods for that framework. The proposed approach is iterative and built into the product design process, and it is applicable to both internally motivated (i.e., self-regulation by companies) and externally motivated (i.e., government regulation) interventions for a range of societal problems, including mental health.

We aim to help shape a research agenda of principles for the design of mechanisms around problem areas on which there is broad consensus and a firm base of support. We offer constructive examples and discussion of potential implementation methods related to these topics, as well as several new data illustrations of the potential effects of exposure to online content.