Building the Field

Engineering and Policy Development

From Emergency to Prevention:
Protecting Journalists from Online Abuse

December 1, 2022

How can online technologies evolve to provide better safeguards to journalists? PEN America and RSM hosted an interdisciplinary workshop with reporters, academics, civil society experts, and platform representatives to brainstorm solutions.

Arzu Geybulla, a journalist based in Azerbaijan, is internationally recognized for her explorations of the interplay between technology, politics, and gender rights. In an ideal world, she would devote all of her professional energy to reporting, writing, and sharing her work with the world. Unfortunately, for over a decade, Geybulla has faced an onslaught of coordinated harassment, including death threats and doxxing. “When online harassment hits, it’s like a rollercoaster,” Geybulla says. And she is not alone: almost two-thirds of female journalists have experienced some form of online threat, and other types of identity-based abuse are common. Such online harassment is a growing threat to open, democratic societies everywhere. How can online technologies evolve to provide better safeguards to journalists?

To brainstorm solutions, PEN America and the Berkman Klein Center’s Institute for Rebooting Social Media (RSM) hosted an interdisciplinary workshop during the summer of 2022. The event brought together reporters, academics, civil society experts, and representatives from Facebook, Instagram, Twitter, Google, TikTok, and Discord. Attendees shared their perspectives on the online abuse of journalists and identified concrete proposals for mitigating that abuse. We summarize the discussions below.

Harassment continues to be pervasive and damaging

A large body of research demonstrates that the online harassment of journalists is a severe and ongoing threat. For example, in a 2019 survey from the Committee to Protect Journalists, 90% of reporters from the U.S. cited online abuse as the biggest safety threat for journalists. A 2020 UNESCO-ICFJ survey of female journalists worldwide found that 73% of respondents had been harassed online; 26% of respondents reported a negative impact on their mental health, and 20% reported being attacked offline in connection to the online violence that they had experienced. Thirty percent also censored themselves on social media in an attempt to reduce the likelihood of future harassment.

Attendees at the RSM/PEN America workshop corroborated these findings. Gisela Pérez de Acha, a reporter, cybersecurity expert, and lecturer at UC Berkeley, described the psychological isolation of facing online abuse alone, without adequate assistance from the platforms and without easy ways to enable trusted friends to help deal with the bad actors (e.g., by blocking or muting). Arzu Geybulla agreed, saying “What makes it the most difficult for me is to face this situation without any support from the platforms. I am an independent journalist – how can I tackle this issue while no action is taken when I report abusive content violating their own community standards?”

Importantly, the online harassment of journalists “is not specific to authoritarian regimes,” said Jeje Mohamed, one of the voices of the Arab Spring who now serves as a Free Expression and Digital Safety Program Manager at PEN America. “We observe exactly the same trolling tactics in the U.S. against journalists, in particular against female journalists of color. Online abuse is not confined by borders.” Workshop attendees agreed that the international scope of the problem necessitates platform changes that are also international in scope. Such changes are complicated by the fact that each platform has users and data centers that span multiple geographic locales, legal jurisdictions, languages, and cultural traditions, all of which must be considered when defining, preventing, and mitigating abuse.

Platforms are improving their protection mechanisms, but more work is needed

To their credit, online platforms have begun rolling out changes to mitigate online harassment. For example, in 2021, Facebook revised their “public figures” policy to offer stronger protections for high-profile individuals, particularly journalists and human rights activists; the new policy restricts the kinds of abusive posts that can be made about such figures. Earlier this year, Google’s Jigsaw team (which researches and pilots technology for making online communities safer) open-sourced a Harassment Manager tool. The tool allows a social media user to automatically detect potentially abusive content, rate it by “toxicity” level, document it, and then bundle that problematic content into a report that can be shared with platform operators, civil society organizations, and employers.
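For readers curious about the mechanics: Harassment Manager builds on Jigsaw’s Perspective API, which returns a score estimating how likely a piece of text is to be perceived as toxic. The sketch below is a minimal illustration of that scoring-and-documentation workflow, not the Harassment Manager code itself; the API key, threshold, and messages are placeholders you would supply yourself.

```python
import requests

# Placeholders: supply your own Perspective API key and the messages to score.
API_KEY = "YOUR_PERSPECTIVE_API_KEY"
ENDPOINT = ("https://commentanalyzer.googleapis.com/v1alpha1/"
            f"comments:analyze?key={API_KEY}")

def toxicity_score(text: str) -> float:
    """Ask the Perspective API how likely `text` is to be perceived as toxic (0.0 to 1.0)."""
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(ENDPOINT, json=body, timeout=10)
    response.raise_for_status()
    return response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def build_report(mentions: list[str], threshold: float = 0.8) -> list[dict]:
    """Bundle likely-abusive mentions into a shareable report, most toxic first."""
    scored = [{"text": m, "toxicity": toxicity_score(m)} for m in mentions]
    flagged = [item for item in scored if item["toxicity"] >= threshold]
    return sorted(flagged, key=lambda item: item["toxicity"], reverse=True)
```

A journalist, or a trusted ally helping to document an attack, could then export such a report and attach it to an escalation; this is roughly the documentation workflow that Harassment Manager wraps in a browser-based interface.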

Despite this progress, the online world is still a harrowing one for journalists. A key problem is that most platform-provided anti-abuse features still place the burden of abuse mitigation on the targeted user: in-platform features offer no easy way for a user to enlist friends to assist with blocking bad actors or documenting instances of abusive posts. An ecosystem of third-party tools tries to fill the gap. For example, Block Party allows a user to create automatic filters for abusive Twitter content, and to recruit trusted allies to review that filtered content and take action on it (through muting or blocking, for example). However, third-party tools are only a stop-gap measure; native platform support is needed to ensure that abuse detection and response are optimally integrated with the rest of the platform’s code and human infrastructure.

The importance of native anti-abuse tools was a recurring theme at the workshop. Viktorya Vilk, Director of Digital Safety and Free Expression at PEN America, said that “platforms have been piloting significant and useful features like Twitter’s Safety Mode, Instagram’s Limits, and Facebook’s Moderation Assist. However, there is still a great deal of work to do.” A core reason is that social media companies rarely coordinate their anti-abuse features with one another, meaning that successful approaches for abuse mitigation on one platform cannot be directly applied to other platforms. As Pérez de Acha stated at the workshop, “We should get these [varied anti-abuse] functionalities on every platform.”

Moving forward: The needs of journalists and the constraints of engineers

As James Mickens, Harvard professor and co-director of RSM, observed, “Modern online platforms are extremely complicated, with millions of lines of code running on tens of thousands of datacenter machines and accepting data from millions of good actors and bad actors. Preventing abuse at that kind of scale is hard. Not impossible—but hard. Devising practical anti-abuse mechanisms will require engineers to talk to those who suffer from abuse and study how abuse happens in the real world. A primary goal of the RSM/PEN America workshop was to facilitate those discussions.”

The workshop was MC’d by Joanne Cheung, a lecturer at the Hasso Plattner Institute of Design at Stanford Engineering and a former strategy advisor for RSM. After an initial roundtable discussion with three journalists who had firsthand experience of online abuse, workshop attendees were placed in two breakout rooms. Both rooms brainstormed about native platform features for keeping journalists safe. However, the “Emergency” room focused on tools that would assist journalists who are in the midst of an active harassment campaign; in contrast, the “Prevention” room focused on new mechanisms for proactively identifying abusive content or building preemptive “moats” around individuals who are likely to be the targets of abuse.

Emergency: Supporting journalists during online attacks

Proposal #1: Native response delegation via trusted circles

For a journalist in the midst of an online attack, the burden of reporting posts and blocking users compounds the emotional damage of the attack itself. Ideally, journalists could use native platform functionality to delegate abuse mitigation tasks to trusted contacts. In contrast to third-party solutions like Block Party, native functionality would be fully integrated with a platform’s primary user interface and with preexisting infrastructure for authenticating delegated users and forwarding abuse complaints to in-platform reporting channels. The platform technologists at the workshop agreed that such native integration is possible, but mentioned that the user interface for delegation would require careful design.
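To make the proposal concrete, here is a minimal sketch of what a native delegation model might look like: the journalist grants scoped, revocable permissions (block, mute, report) to trusted contacts, and the platform checks the grant before carrying out any action on the journalist’s behalf. Everything here is hypothetical; the class and function names do not correspond to any platform’s actual API.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Action(Enum):
    BLOCK = auto()
    MUTE = auto()
    REPORT = auto()

@dataclass
class DelegationGrant:
    """A scoped, revocable grant from a journalist to one trusted contact."""
    delegate_id: str
    allowed_actions: set[Action]
    revoked: bool = False

@dataclass
class TrustedCircle:
    """Delegation state attached to the journalist's own account."""
    owner_id: str
    grants: dict[str, DelegationGrant] = field(default_factory=dict)

    def add_delegate(self, delegate_id: str, actions: set[Action]) -> None:
        self.grants[delegate_id] = DelegationGrant(delegate_id, actions)

    def may_act(self, delegate_id: str, action: Action) -> bool:
        grant = self.grants.get(delegate_id)
        return grant is not None and not grant.revoked and action in grant.allowed_actions

def perform_delegated_action(circle: TrustedCircle, delegate_id: str,
                             action: Action, target_account: str) -> str:
    """Check the grant, then act on the journalist's behalf (and leave an audit trail)."""
    if not circle.may_act(delegate_id, action):
        raise PermissionError(f"{delegate_id} may not {action.name} on behalf of {circle.owner_id}")
    # A real platform would invoke the same internal block/mute/report services that the
    # account owner's own buttons use, and record the action for the owner to review.
    return f"{action.name} applied to {target_account} by {delegate_id} for {circle.owner_id}"

# Example: a journalist authorizes an ally to block and report, but not to mute.
circle = TrustedCircle(owner_id="journalist_account")
circle.add_delegate("trusted_ally", {Action.BLOCK, Action.REPORT})
print(perform_delegated_action(circle, "trusted_ally", Action.BLOCK, "abusive_account"))
```

The important design choices are that grants are scoped (an ally can block and report without gaining broader control of the account), revocable at any time, and audited, so the journalist retains visibility into everything done in their name.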

Proposal #2: Auto-suggesting the best mitigation strategies

Journalists at the workshop observed that, during an attack, a targeted reporter may struggle to identify the optimal mechanisms for addressing the abuse. Even if a platform supports native mitigation features, a journalist (or a delegated contact) may be unaware of all such features or their intended usage. The journalists therefore suggested that platforms create more usable interfaces for native anti-abuse functionality. A specific proposal was for an “auto-suggest” feature that would be triggered by an initial mitigation action like the blocking of an abusive user; the auto-suggest feature would propose additional mitigation strategies, e.g., by surfacing tools that facilitate documentation or tighten privacy and security settings.
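A minimal sketch of how such an auto-suggest feature might behave is shown below; the trigger names and suggested follow-ups are invented for illustration and do not reflect any platform’s actual feature set.

```python
# The trigger names and suggested follow-ups below are invented for illustration;
# a real platform would populate them from its own feature set.
FOLLOW_UP_SUGGESTIONS = {
    "blocked_user": [
        "Review who can reply to, tag, or message you",
        "Export recent abusive posts into a documentation report",
        "Invite a trusted contact to help monitor your mentions",
    ],
    "reported_post": [
        "Block or mute the post's author",
        "Check your privacy settings for exposed location or contact details",
        "Turn on filtering of message requests from unknown accounts",
    ],
}

def suggest_next_steps(trigger: str) -> list[str]:
    """After an initial mitigation action, surface related tools the user may not know about."""
    return FOLLOW_UP_SUGGESTIONS.get(trigger, [])

# Example: a journalist has just blocked an abusive account.
for step in suggest_next_steps("blocked_user"):
    print("Suggested:", step)
```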

Proposal #3: Crisis concierges for report escalation

In some cases, online harassment can be thwarted solely via mitigation features surfaced in a platform’s user interface. In many incidents, however, content moderation systems fail to act on the content reported as abusive, and the journalist must engage in direct conversations with platform employees to explain the details of the abuse. For example, if abuse is happening in a linguistic or cultural context for which the platform has little expertise, a targeted journalist may need to explain this context before the platform can deploy a relevant mitigation strategy. Unfortunately, platforms currently lack native interfaces for initiating and sustaining these discussions.

A journalist facing abuse can ask civil society organizations like Access Now to escalate an abuse complaint with a platform. However, these advocacy groups have limited resources (and often themselves struggle to elicit action from platforms). A journalist can also ask personal contacts at a platform company to escalate an abuse complaint, but not all journalists have such contacts. And even when a journalist does establish a direct conversation with platform employees, this approach does not scale: even well-intentioned employees often lack the organizational authority to mobilize the people or resources needed to adequately address the abuse.

At the workshop, journalists proposed that platforms hire “crisis concierges” who would understand how platforms internally escalate abuse reports and who would also possess expertise in trauma-informed, context-specific support. “There should be some way for someone already in the bullseye to have access to human support,” said Nadine Hoffman, the International Women’s Media Foundation’s Deputy Director. Julie Owono, the Executive Director of Internet Sans Frontières and a member of the Facebook Oversight Board, pointed to the Digital Rights Foundation (DRF) as an inspiration. The DRF operates the first toll-free helpline for victims of online harassment and violence in Pakistan; the DRF also provides anti-harassment training to journalists and human rights activists, centering the advice around the specific cultural and linguistic contexts encountered in Pakistan.

Prevention: Reducing exposure to online attacks

Proposal #1: Trauma-informed training for engineering & design teams

Journalists at the workshop stated that trauma-informed abuse education should not be limited to platform employees in the direct escalation path for abuse complaints; it should also be provided to the employees who work on platform infrastructure and user experience design. Examples of such infrastructure are the user-facing code that implements abuse reporting mechanisms and the platform-internal machine learning models that try to proactively identify abusive content. The lived experiences of journalists and other high-profile abuse targets provide critical insights about the scale and intensity of abuse that platform engineers must identify and shut down.

Proposal #2: Erecting barriers to cross-platform abuse amplification

A journalist often has accounts on multiple social media sites. As a result, abuse often plays out across multiple platforms, with harassers simultaneously targeting the same victim on different sites. Even if a journalist only receives abusive messages on a single platform, the attack may have been planned on another one. Such “cross-platform brigading” is “a key issue that needs to be solved,” according to Michelle Ferrier, founder of TrollBusters, a service that helps journalists respond to harassment campaigns. Journalists at the workshop urged platforms to be proactive in exchanging data about abuse; they hoped that these exchanges would spur shared best practices for thwarting both single-platform abuse and cross-platform brigading. Granting external researchers access to abuse data would also allow academics and other members of civil society to partner with tech companies to better understand and combat abuse.
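As a rough illustration of what lightweight signal-sharing could look like (loosely analogous to existing hash-sharing efforts for other categories of harmful content), the sketch below builds a shareable record in which the abusive text is hashed, so that platforms could match identical copy-pasted harassment across sites without exchanging raw posts. The record format and field names are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_abuse_signal(platform: str, content: str, target_handle: str,
                      campaign_tag: str) -> dict:
    """Build a shareable abuse signal; the abusive text is hashed so that platforms can
    match identical copy-pasted harassment without exchanging the raw post."""
    return {
        "platform": platform,
        "content_sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
        "target": target_handle,
        "campaign_tag": campaign_tag,
        "observed_at": datetime.now(timezone.utc).isoformat(),
    }

def signals_match(signal_a: dict, signal_b: dict) -> bool:
    """Flag possible cross-platform brigading when the same hashed content
    targets the same account on two different sites."""
    return (signal_a["content_sha256"] == signal_b["content_sha256"]
            and signal_a["target"] == signal_b["target"])

# Example: the same threatening message observed on two different platforms.
a = make_abuse_signal("platform_a", "example threatening message", "@reporter", "campaign-123")
b = make_abuse_signal("platform_b", "example threatening message", "@reporter", "campaign-123")
print(json.dumps(a, indent=2))
print("Possible cross-platform brigading:", signals_match(a, b))
```

Exact-match hashing only catches verbatim reposts, of course; a production system would likely need fuzzier matching as well as strong privacy and governance agreements between the participating platforms.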

Proposal #3: Press badge credentials for journalists

The most controversial proposal was to have platforms verify the identity and occupation of self-proclaimed journalists, such that platforms would automatically give verified journalists a heightened set of abuse defenses (and/or more efficient escalation channels for abuse). Several workshop attendees liked the idea of journalists receiving the strongest level of abuse protections by default. However, most attendees thought that, as a practical matter, social media companies should not become the arbiters of who is or isn’t a “journalist.” Most attendees also felt uncomfortable with any kind of verification process that forced all journalists, without exception, to divulge real-life identifying information like names, home addresses, and employers. Being a journalist is physically dangerous in many countries, and workshop attendees worried that governments in those countries might strong-arm platforms into revealing sensitive personal information about journalists.

Next steps

At the conclusion of the workshop, multiple attendees reaffirmed the importance of the workshop’s multi-stakeholder approach. Attendees were also excited by the proposals that emerged from the breakout sessions. PEN America and RSM are currently planning follow-up events to flesh out the most promising proposals and to further brainstorm about ways to safeguard journalists online. To join the ongoing discussions, contact Elodie Vialle, an RSM Assembly Fellow and a Consultant for Digital Safety and Free Expression at PEN America. We invite you to learn more about PEN America’s current slate of online abuse defense programs; we also encourage you to sign up for updates from RSM to learn about upcoming events, research announcements, and opportunities to get involved.

This blog post was written by Elodie Vialle, Toni Gardner, James Mickens, and Viktorya Vilk. The workshop was organized by Elodie Vialle, Viktorya Vilk, Joanne Cheung, Toni Gardner, and James Mickens. Attendees participated in the workshop under a modified version of the Chatham House Rule which allowed individuals to indicate whether or not they wanted their comments to be attributed; this approach was inspired by Kendra Albert’s essay “The Chatham House Should Not Rule”.