Analysis and Theory

A Herculean task: How do we save ourselves from social media?

February 2, 2024

This post was originally published on November 17, 2021 to RSM’s former Medium account. Some formatting and content may have been edited for clarity.

Social media has become a beast that no one can tame. When the internet was created, there was a utopian, neoliberal expectation that it could be a neutral, inclusive space where race, sex, and class did not matter, where everyone could have access, participate, and benefit. The concept was great; it offered a way of connecting to the world through a computer, but it was developed without any idea of how it might actually grow and evolve — or devolve — over time. “Social media” emerged in the same way, repeating the same mistakes. The creators and users did not think enough about the future of these technologies, what harms — individual or societal — might ensue, whether users’ data and privacy might be violated and monetized, or how such “free” services would function economically. The companies thought about the future in terms of business models and profits, which is… what companies do. Who has true ownership over digital spaces? As new problems emerge — violations of privacy, polarization, harassment, manipulation, misinformation — whose problems are these to solve? There is always rhetoric about how it is “for the people” — yet the people are not involved in social media’s governance. In the United States, the regulatory frameworks are extremely limited (imagine guard rails made of shoelaces), hardly enforced, and inconsistent across platforms. Meanwhile, the problems to which social media gives rise grow bigger and stronger every day.

In Greek mythology, the Hydra was a multi-headed water monster said to guard an entrance to the underworld. Highly venomous and dangerous, it grew two new heads in place of each one that was cut off.

Social media is a modern-day Hydra.

The challenge for society is to face this beast (hate speech, harassment, violence, propaganda, misinformation), but each time a head is cut off (content moderation, closing platforms, banning users), new heads appear where only one was before (doxing, new platforms, more violence). Sometimes there are so many platforms, problems, and people that it is hard to know where to even begin making it a better space for everyone. And if social media is the Hydra, are we looking for a modern-day Hercules who can slay the beast? And if not Hercules, who, or what, do we need?

There are rarely solutions proposed that simultaneously address the multiple problems social media causes in society. I think we are dancing around the real problem: people. People are what make up the internet and social media. Yes, there are bots, but human beings had to create and program them. Yes, there are trolls waiting to pounce in the comments section, but those online personae are created and executed by real live human beings. Yes, there are algorithms behind the biases we see in search results, but humans are behind the algorithms.

People are the problem. It is hard to admit that we are the problem, but that has to be the first step in making positive change.

We cannot slay this mighty beast without acknowledging that the problems begin with us. This responsibility is not distributed equally (power never is), but the conversation usually points to an externalized, downstream problem instead of the humans (and groups of humans) upstream. If we stopped attempting to erase the presence of race, class, culture, and sex on the internet and instead acknowledged how large a presence they have in our world, we could create better programs, software, and systems for everyone.

If we looked not at how to profit from people but at what people need, perhaps we would find a kinder internet and genuine communities that feel positive to everyone.

Instead of looking to the future of social media, we have to address the present. As an educator, I believe continuing education for the tech workforce is critically needed. We cannot recognize our implicit biases written into code and algorithms if we do not acknowledge, confront, and address them. One way to tackle implicit bias is through education and training. Structural engineers are required to take continuing education courses to renew their license to practice, so that they stay current with building codes and industry best practices. Why would we not require the same of the people behind the critical infrastructure of the internet? Learning and training should not stop just because you completed a degree or a certificate program. The internet and social media are such an integral part of our daily lives that we should humbly acknowledge what it takes to maintain these platforms, and how their development has outsized impacts on society. Courses that address ethical, cultural, and historical impacts on the internet may not be required in college, but they can and should be part of continuing education for the tech industry.

In addition to education, actively recruiting and maintaining a diverse workforce is critical to improving the current state of social media. White men still dominate the industry, the boardrooms, the C-suites. That does nothing to undo the racial bias and discrimination baked into the internet. Building a workforce with more women and minorities at every level, and particularly in positions of leadership, can begin to counteract some of the missteps of the past.

The first step in acknowledging the problem is creating accountability for what has been created and how it has impacted the world. CEOs will sit in front of Congress and say they “welcome regulation” but, for some of the greatest innovators of the century, their efforts to self-regulate or to create solutions have been too small, too late. Meanwhile, Congress will spin in circles before creating concrete regulations and policies that benefit the people. Someone has to step up and take responsibility. Additionally, we have to broaden our readings and research to include women and minorities. Reading Safiya Noble’s Algorithms of Oppression is a great place to start, and Tressie McMillan Cottom’s work on race and racism in the digital society is another, but we (academics, educators, people who use the internet) have to do the work too. There are ways that everyone who uses social media can help to make it a better place; it is not the task of one man alone. Perhaps instead of seeking out some Hercules to slay this contemporary multi-headed monster, we should look for a mirror.

Dana Williams-Johnson is an instructor in the Howard University School of Business’s Marketing Department and is simultaneously pursuing her doctorate in Communications, Culture and Media Studies at Howard University.

This is an independent project developed while the author was working as a research assistant for the Institute for Rebooting Social Media at the Berkman Klein Center for Internet & Society at Harvard University. The Institute for Rebooting Social Media is a three-year, “pop-up” research initiative to accelerate progress towards addressing social media’s most urgent problems. Research assistants conducted their work independently with light advisory guidance from Berkman Klein Center staff.