Analysis and Theory

Student Groundwork

February 2, 2024

Initial explorations from research assistants at the Institute for Rebooting Social Media.

This post was originally published on November 17, 2021 to RSM’s former Medium account. Some formatting and content may have been edited for clarity.

There’s emerging consensus that the digital social space is broken. Can it be repaired? How? This fall, the Berkman Klein Center launched the Institute for Rebooting Social Media, a three-year “pop-up” research initiative to accelerate progress toward addressing social media’s most urgent problems.

To start spinning up the Institute, a team of staff and research assistants investigated the existing landscape of academic and civil society efforts to improve and reimagine social media, exploring key topics including privacy, harassment and harmful speech, misinformation, and intermediary liability laws, particularly Section 230 of the Communications Decency Act.

Out of this broad mapping effort, research assistants developed their own research questions and short independent projects, which are summarized below.

Addressing social media harassment against journalists: preventing chilling effects on democratic discourse

Klaudia Jazwinska, a journalist by training and a researcher at the Center for Information Technology Policy at Princeton, examines how social media is used to harass journalists. She argues that rampant harassment of journalists on social media poses a genuine threat not only to individual journalists, but also to the journalism profession and to democracy. She agrees with Silvio Waisbord, a professor of media and public affairs at George Washington University, that online harassment is a form of “mob censorship.” If journalists become hesitant to cover subjects that might provoke harassment from online trolls, the resulting chilling effect on reporting would have serious, broad implications for the future of free and open democratic discourse online. Maintaining a social media presence is, in most cases, implicitly or explicitly required of journalists, and Jazwinska calls on both media outlets and social media platforms to ensure that this necessary aspect of the profession does not expose journalists to threats of violence or pressure them to self-censor. In recent years, organizations have released resources and digital safety guides to address online abuse. However, responses to such a pervasive issue often focus on individual actions, even though harassment of journalists is a “systemic, information ecosystem-level problem.” Jazwinska concludes that online harassment must be addressed at the systemic level, not left to individuals.

Increasing media literacy: a people’s guide to digital resiliency

Olivia Owens, a recent graduate of Indiana University Bloomington, where they studied Sociotechnical Conflict Analysis, created a “digital resilience toolkit” as a resource for readers to increase their media literacy, understand their possible rights and responsibilities as internet users, and protect themselves against a variety of online threats. The toolkit begins with a glossary of key terms and definitions, meant to give regular internet users and experts alike a shared language for more productive conversations about digital resilience. It also provides summaries of and links to multimedia resources that readers can use to educate and protect themselves against online dangers. Lastly, the toolkit proposes a “Social Media Users’ Bill of Rights and Responsibilities” for individual users to consider. The rights users could ask of platforms include assurances that “violent, untruthful, and/or harassment-based content” will be dealt with promptly and that individual users’ data and personal information will be protected and not sold or shared without explicit consent. In turn, users’ responsibilities would include actively avoiding spreading false information, reporting disinformation and harassment, and expanding their feeds by engaging with media that challenges their preconceptions or biases.

Exploring Section 230: rethinking content moderation today to prevent overly restrictive legislation tomorrow

Trisha Prabhu, a senior at Harvard College interested in the regulation of online speech and digital harassment, explores the incentive structures created by Section 230 of the Communications Decency Act, a piece of U.S. federal legislation that frees platforms from liability for much of their users’ speech and behavior on the platforms, asking whether these protections actually incentivize platforms to discourage and moderate harmful speech, as the law’s creators intended. She concludes that Section 230 does incentivize platforms to invest in content moderation in most cases, but “imperfectly.” Prabhu summarizes several high-profile court cases showing that when Section 230 “goes wrong,” it can go spectacularly wrong, leading to serious individual harms that fall disproportionately on people in marginalized groups. She argues that platforms need to rethink how they approach content moderation in order to prevent Congress from enacting more restrictive legislation in the future.

Imagining social media as a modern-day hydra: as one issue is addressed, new ones arise

Dana Williams-Johnson, an instructor in the Howard University School of Business’s Marketing Department and a doctoral student in Communications, Culture and Media Studies at Howard University, explains in her piece how social media resembles a modern-day hydra: each time an intervention is made to make social media a better place (content moderation, shutting down platforms, banning users), new problems arise (doxxing, new platforms, more violence). She contends that continuing education for the platform technology workforce is a critical step toward taming the multi-headed beast. Professionals in other fields, such as structural engineering, are required to take courses before renewing their licenses; professionals behind the critical infrastructure of the internet should be no different. She further suggests that actively recruiting and retaining a diverse workforce is equally critical. Williams-Johnson concludes that, ultimately, the task of slaying the pervasive problems of social media and the internet more broadly is not up to a “Herculean” individual alone. Rather, it is a collective mission for academics, educators, and everyone else who uses the internet.

Learning from Parler?: sharing user influence data publicly

Daniella Wenger, a first-year student at Berkeley Law, explores whether Parler, a social media platform that caters primarily to a right-wing user base, might offer lessons on data transparency. Parler markets itself as a free-speech alternative to the comparatively aggressive content moderation practices of large social media sites like Facebook and Twitter. In the days following the January 6th insurrection at the U.S. Capitol, the Google Play Store and the Apple App Store removed the Parler app, and Amazon Web Services (AWS) suspended Parler from its web hosting services over its refusal to remove posts inciting violence.

Her research draws a number of lessons from Parler, and from the potential implications of its AWS ban, for the future of internet legislation, particularly amendments to Section 230. Wenger argues that, in the wake of the insurrection, takeaways from Parler’s deplatforming could lead to a paradigm shift, prompting Congress to require social media platforms, at a minimum, to report anticipated acts of violence to authorities. Another lesson is that Parler, unlike many other platforms, publishes users’ “virality scores,” which are based on a combination of upvotes and downvotes, through its public API. Wenger argues that this practice could set a precedent for other social media companies to publish data about the potential popularity and influence of users on their platforms. This might be useful for better understanding how viral content flows and for moderating content more stringently as it becomes more popular.
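To make the kind of data Wenger describes concrete, here is a minimal sketch of a per-post influence signal computed from upvotes and downvotes. Parler’s actual API and scoring formula are not documented in this piece, so the Post fields, the virality_score formula, and the review threshold below are hypothetical illustrations of the general idea, not Parler’s real implementation.

```python
from dataclasses import dataclass


@dataclass
class Post:
    """Hypothetical record of the kind a public API might expose per post."""
    post_id: str
    upvotes: int
    downvotes: int


def virality_score(post: Post) -> float:
    """Toy influence signal combining upvotes and downvotes.

    This net-engagement ratio in [-1.0, 1.0] is an assumption, not
    Parler's published formula; it just shows how a platform could
    derive a single per-post metric from vote counts.
    """
    total = post.upvotes + post.downvotes
    if total == 0:
        return 0.0
    return (post.upvotes - post.downvotes) / total


def needs_closer_review(post: Post, threshold: float = 0.8) -> bool:
    # The moderation idea from the piece: scrutinize content more
    # stringently as it becomes more popular and influential.
    return virality_score(post) >= threshold


if __name__ == "__main__":
    post = Post(post_id="abc123", upvotes=950, downvotes=50)
    print(f"score={virality_score(post):.2f}, "
          f"flag for review={needs_closer_review(post)}")
```

If platforms published even a simple signal like this, outside researchers could study how content gains influence over time, which is the transparency benefit Wenger highlights.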

Thank you to our first group of research assistants!

We look forward to building on their work and on our team’s initial learnings over the course of the Institute as we dig into big questions like:

  • How can social media sustain and enable healthy communities, broader public conversations, and democratic participation for the public good, while minimizing harms?
  • What new institutional arrangements and governance models are needed to ensure social media serves democracy?
  • What kinds of regulatory frameworks, corporate policy, and new business models might encourage these shifts?
  • What might “due process” for online speech look like?

We’re excited to collaborate on and co-create the Institute, so please be in touch with ideas and related work.

The Institute will be hosting a range of research projects and programming, including events like Private Social Media Data in the Public Interest: What’s Next? and the call for applications for faculty visiting scholars (closing Friday, November 19, 2021).

This rundown on research assistant work at the Institute for Rebooting Social Media was written by Casey Tilton and Hilary Ross.