The social media landscape has changed tremendously over the last few years. What might it look like in the years to come?
In an interview with BKC staffer Jay Kemp for RSM’s Reboot, Rebuild, Reimagine, Faculty Co-Director James Mickens tackled questions on topics related to AI, platform transparency, and emerging trends in social media research.
The transcript below has been edited for clarity.
JAMES MICKENS: I mean, I can only pick one, apparently. There are a lot of big challenges. I think one challenge that you see—not just in social media, but across the tech spectrum—is this notion of a very small number of companies having a lot of concentrated power. You certainly see this in social media, but you also see it in a variety of technological domains. I think that's problematic for a variety of reasons. First of all, it's problematic from the user privacy perspective, because all of your data is stored by a very small number of companies, each of which can see what you do all across its ecosystem. It's also problematic from the innovation perspective. When you have a very small number of players dominating markets, that hurts the speed with which new products get made, and it makes companies less responsive to users.
Also, I think that there are a lot of challenges involving AI in general, and these cut across different specific verticals. For example, there's been this theory called "the dead Internet theory," which has been around for a while. It basically says that most content is produced by bots and most of the people you talk to are actually bots. This was sort of a tinfoil hat thing for a long time. Now I think it's real. So now, I think it's time to take the tinfoil hat off. You look on Reddit or TikTok, and there's increasing, well-founded concern that a lot of the content being generated is fake and is designed purely for engagement, but doesn't actually connect you to a real human on the other side.
MICKENS: I think that one of the big problems that users currently feel on social media networks is that they don’t understand how the content that’s on their timeline was actually chosen. People end up getting frustrated with this because they say things like, “why am I not seeing posts from my family or from my friends?” Or even if people don’t like their family or friends, “why do I see this ad? Why is it following me around the internet? Why am I seeing a video from this influencer or not another influencer?” So I think transparency, or a lack thereof, is actually one of the big frustrations people have with companies in the social media space. And I think that’s one of the reasons why you see people starting to migrate to some of these nontraditional, more decentralized platforms like Bluesky that, at least in theory, promise you more control over what you’re seeing and more visibility into why you’re seeing it.
With respect to mechanisms, that's tricky, because I think that if you look over the past three or four years, the trend is for these platforms to become less observable. Various companies are shutting down the APIs that would allow researchers to see what's going on, or they're gatekeeping them behind very high fees, which place those APIs out of the reach of regular people. So I think that observability will increasingly come down to two things. First, regulation that might force companies to provide more data or otherwise comply with transparency requests from governments, researchers, and users. And then also, the notion of crowdsourcing data—like-minded users banding together and sharing their data to help researchers understand what's going on. I think that's another promising approach because, at the end of the day, the stuff that you see on your own feed—presumably you should have some say over what's done with that stuff.
MICKENS: Well, I think a very important aspect of that is making sure that there is good community building among researchers. I'm a member of a group called The Coalition for Independent Tech Research. That group tries to act as one of these community-building forums where sociologists, public health people, computer scientists, lawyers, and journalists can come together and talk about what's important, and in some cases, provide solidarity if, for example, one particular project is being attacked by the companies or by politicians. So I think that community building is actually going to be quite critical as we move into this next phase of technology governance because, as I mentioned earlier, there's increasing centralization of the Internet and the various technologies that are deployed on top of it. So I think the only way to counterbalance that centralization is to create other power centers, other collections of people who have power through their collective strength.
One of the things that I think is great about RSM is that it has provided one of these watering holes, one of these shared spaces, where people who care about social media and technology writ large can find each other, present their work, learn from each other, and just share that sense of community. I think that one of the things that has characterized the past four or five years in the technology industry is that people on the Left, Right, and Center, politically speaking, all think something has gone wrong. There's often disagreement over what specifically has gone wrong and how to fix it, but people think that the status quo is not working. And I think the role of a place like RSM that's situated in academia is to encourage that type of critical analysis of technology and to be open to the possibility that technology has helped society in a large number of ways. But we also don't want to be pure techno-optimists; we don't want to just say, "the tech industry will figure it out." We want to actually apply a critical lens so that technology can fulfill its promise without creating a lot of collateral damage to users, groups, nations, or languages that weren't traditionally considered by the tech industry.
MICKENS: I think the best way to create technology for the public good is to make sure that the public good is considered at every step of the engineering process—starting from ideation, then going through design, then implementation, then deployment, and then product rollout. I think that a lot of engineers just want to build stuff. And so, just like with security, where a lot of engineers think, "I'll build it, then I'll make it secure," I think a lot of engineers think, "I'll deploy this thing, and once I have users, then I'll make it safe." We now know that doesn't work. So I think it's critical, in both our schools and inside our companies, to be educating engineers and making sure that they are actually thinking about the public good at every step of the work.
MICKENS: I am somewhat surprised that we haven't seen more aggressive disconnections from social media. I think that people feel stuck. People will take a detox for a bit and then realize there's actually content and people they want to keep up with. And so I think society hasn't quite figured out how to deal with that fraught relationship: how do I keep the parts of social media I like, but ditch the parts I don't?
MICKENS: I think that open source projects provide a really interesting case study here. With open source projects, everyone can look at the code, and in many of these projects, in theory at least, anyone can contribute. Let's take Linux, for example. Linux has to serve a very large user base, and Linux is open source. There are certain parts of the Linux community that are very open to community feedback. But there are other parts of the Linux community where fiefdoms have essentially grown up: a small set of people own a particular feature, and they get very upset when people suggest changes to that feature. So open source, to me, is both a success story and an illustration of some of the governance challenges. And ultimately, what lies at the heart of your question is governance. How do we take a technological product and allow users to have a say, not just engineers or managers? That's an open question.
MICKENS: AI is everywhere. That's both good and bad. I think that there is an economic question about whether it should be everywhere, independent of whatever benefits it might provide. I think that economic question is largely being driven by shareholders who don't want to miss out on the next big thing. I think there were some shareholders who thought that they missed out on mobile. Like, there are some shareholders who think that they missed out on search. And I think that there are now a lot of people who are worried that if they don't go AI for everything, all the time, 24/7, they're going to miss out on the next big thing. So I think that there are powerful economic reasons, driven by fear, that are pushing AI into a lot of different products.
Now, should that be done? Well, I think the results are currently mixed. I mean, there are some things that AI can do that are quite amazing. So this whole idea, for example, that I can speak in one language and then have my voice speak in a different language in near real time: that's amazing. That used to be sci-fi stuff. I mean, you see this on Star Trek, and you tell your parents, "that's amazing!" And they're like, "yeah, in the year 2525 that stuff will be here." Well, it's here. So that stuff is great. But let's take LLMs and generative AI: the fact that we now have these models that can generate realistic-looking text and images. There are a ton of problems there. First of all, what was that model trained on? Were all of the people who gave their data for training even notified? Were they compensated? Do they have any right to the earnings from the model? There are all kinds of really thorny legal, economic, and computer science problems there. Furthermore, when models produce output, how is it detectable as model-generated output?
This is a key challenge when trying to understand what authenticity means now on the Internet. A big problem that you'll see in a lot of online media now is, let's say you go to some Reddit thread or some Facebook post about a movie, and then you see people talking about that movie. You don't know if those are actual people or not. And increasingly, what you're seeing in these types of threads is people accusing other folks of being bots. One commenter will say something that someone else doesn't like, and they'll say, "nice comment, bot." And it's not always clear if they mean that merely as an insult or as an actual existential question: are you actually a piece of code running somewhere in a data center? These types of problems are really making it difficult to understand what authenticity is on the Internet. I like to think of myself as being pretty savvy from the computer perspective. I got a degree in it. I always used to look at ads and tell my friends and parents, "oh, that's definitely CGI; that's not real." Now, sometimes I look at stuff and it's actually quite hard to tell. I think that has a significant impact on the usefulness of the Internet in general.
Over the past two or three years, there has been this very popular hack for making Google searches work better: you type in your search term and then add "Reddit" at the end. This was a popular hack because the thought was that it would get you to online discussions that are more likely to be generated by humans. But the utility of that hack is decreasing as more and more bots and other types of artificial participants join the system.
A final one that I’ll mention is the issue of bias. I think there are a lot of people out there—some of them technical, some not—who think that if a machine or a program does something, then it must be correct. It must be the right thing because it’s just zeros and ones, so how could it suffer from bias or things like that? That’s just not true, and I think at this point we have a lot of evidence to show that models, much like people, are heavily influenced by what they are taught. So I think there’s a really interesting challenge that we have in educating people to understand that AI is not always correct—computers in general aren’t always correct. We need to train technologists to understand that when you build these systems, you need to think about: who is the intended user base; who do you think the product is for; who are you not thinking the product is for, since they might end up using it, too; and what are the unintended consequences? I think things like that really have to become part of the ethos of designing these systems.
MICKENS: I think that five to ten years from now, academic labs that want to have an impact on the real world will have to integrate more deeply with industry. I don’t necessarily mean they have to take industrial funding, and I don’t necessarily mean they have to only hire people from industry. But I think that we are in a much different world right now than we were 30 or 40 years ago, when the Internet wasn’t pervasive, not everyone had a supercomputer in their pocket, and it was much easier for academics to say, “oh, here’s some neat idea,” and have that idea just go into the world because there weren’t as many established types of technology. I think now, you have a lot of entrenched companies in a variety of different technological sectors, and those companies have embedded a lot of deep technical knowledge into their products. So I think that if you’re a regulator, a policymaker, or a social scientist who wants to understand and improve how technology works, you have to bring in some expertise from the tech industry to fully address those problems.
MICKENS: The Applied Social Media Lab is, in some ways, very similar to us. We want to bring in a set of people who care about making technology work for the public good. The key difference between RSM and ASML is that ASML mostly hires engineers. We bring people in from industry who have experience building large-scale, complicated pieces of software. We want to take that talent and then have them work specifically on technology for the public good.
MICKENS: I hope that the legacy of the Institute for Rebooting Social Media is that we served as a great example of the good that can come out of bringing a lot of folks together and then letting them marinate. There’s this myth in both academia and industry that the best work is solo work—that essentially the way you get the best out of people is to just put them in an office somewhere, give them some peace and quiet, and then they’ll churn out the next great work. That’s true sometimes, but I think in general, the best ideas come out of cross-pollination; they come from people talking to each other. I’ll give you a concrete example of that: what should we do with TikTok, where “we” is the American government, or “we” is just society in general? There are so many interesting and diverse opinions about what to do with TikTok.
Is it good? Is it bad? How does the algorithm work? What do they owe us in terms of transparency? I think there have been a lot of really good discussions that we’ve helped facilitate at RSM to help people understand the different facets of this problem. So I hope that is something everyone who came to RSM took away from their time here: it’s really great to put a bunch of people in a room—virtual, real-life, or otherwise—and just have them talk about some of these complex issues.