Trust & Safety in the Majority World Workshop  

April 3–4, 2024

April 3, 10:00 AM – April 4, 2:00 PM

This workshop will bring together professionals within the field of Trust & Safety (T&S) to discuss socio-technical issues that disproportionately affect the Majority World.

The Trust & Safety (T&S) in the Majority World workshop, hosted by the Institute for Rebooting Social Media and the Integrity Institute, will bring together professionals to discuss the unique concerns that arise in this field within the context of the Majority World. T&S operations for the Majority World have historically been under-resourced and deprioritized, leading to content moderation failures, the amplification of hate, and safety gaps in products that should serve users in all markets equitably. As technology becomes ever more pervasive in our public and private lives, T&S equity is becoming an important human rights issue.

We will begin the workshop by discussing gaps and challenges in Trust & Safety today. However, our primary goal is to cultivate generative discussions about what aspects of T&S must be changed, improved, or dismantled to make way for a better future. Based on the discussions from this workshop, we will produce a white paper with recommendations for continued work.

Logistics

This workshop will span two days and focus on eight curated themes that address core issues within the Trust & Safety sphere; these topics were inspired by submissions from our initial call-out. Throughout the workshop, each 45-minute time block will feature two topics discussed in parallel sessions, with each participant free to decide which session to attend. We hope for active participation from every attendee as we brainstorm and gather ideas across both days! We will end the workshop with final remarks on these ideas and concrete suggestions for how to improve T&S in the Majority World.

Themes 

1. Content Moderation: Censorship and Human Rights Violations in the Majority World  

The complexities of handling content moderation at scale are well known, particularly where automated systems are involved. Past failures of these systems, especially for languages that are less widely spoken, have had significant impacts on marginalized communities in crisis situations, precisely when those communities face heightened risks. We will discuss how co-designing these systems can inform future design and immediate changes, and how to ensure comprehensive and effective moderation strategies for moderators themselves.

2. Ranking & Data: The Risks and Dangers to the Majority World    
Data science, engagement metrics, and ranking algorithms can directly shape priorities and behaviors. This session will discuss the problems and challenges with engagement-based ranking, and how and why it incentivizes behaviors that can contribute to conflict in the Majority World. We will ask how we can change these incentives to make them healthier and more equitable for society globally.

3. Collaborating against Digital Authoritarianism
Private tech companies have growing influence over matters of global governance, international diplomacy, elections, wars, and global economies. The business frameworks these companies regularly employ often struggle to balance human rights, justice, or traditional models of civil governance in a scalable, sustainable manner. We will discuss how to identify the frameworks and power dynamics that disproportionately impact the Majority World and how we might mitigate the risks of digital authoritarianism.

4. Product Design and Safety: Centering the Marginalized in Design Processes 
This session will look at the design of Trust & Safety systems, name the implicit values baked into these systems, and identify new values that center the human needs of moderators and T&S operators based in the Global South. We will also discuss the range of positive user experiences that become possible for moderators when we leverage AI and machine learning.

5. Overstepping Authority: Addressing Bias in Tech Product Policies
How can we ensure that appropriate resource allocation and crisis protocols are established to mitigate harm and to guarantee that user protections are implemented and applied equitably? We will begin this session by drawing insights from recent Oversight Board decisions, as well as from incidents of doxxing in conflict regions and companies' responses to them.

6. Coloniality and Imperialism: How this shows up in T&S Operations
Coloniality refers to long-standing patterns of power that emerged as a result of colonialism and that continue to define culture, labor, intersubjective relations, and knowledge production beyond the strict limits of colonial rule. Imperialism is the extension of power and influence into other areas and territories for economic or political gain. This session will discuss the nuanced ways that Trust & Safety operations replicate these frameworks of authority, and how understanding this dynamic offers potential new pathways for innovating equitably.

7. AI Systems & Model Development: Disproportionate Impacts on Majority World Countries
In this session, we will discuss how many AI evaluations remain Western-centric and define a narrow view of what ‘fairness’ or ‘harm’ looks like. We will also discuss the need for broader, cross-cultural perspectives in the development of AI systems that serve the Global Majority.

8. Internationalization and Localization: What are the Right Practices for the Majority World?
This session will focus on the general gaps, challenges, and lack of localization in T&S work in the Global Majority, particularly within the context of content policy development. We will discuss initiatives like Project Circuito, which aims to bring a local perspective to content moderation practices in Latin America, and will hear how Wikimedia navigates local perspectives in decision-making and what it has learned from doing so.