Implausipod
Art, Technology, Gaming, and PopCulture
Implausipod
E0032 Baked In: Social Media and Tech Determinism
How much of your experience online is dictated by the environment you're in, and how it was constructed? What if you rebuild Twitter, and it still ends up being toxic? Did you fail, or succeed without knowing it?
These are the kinds of questions that arise when we look at technology from a deterministic point of view: that technology is the driver of cultural and social change and growth. And while this ideology has its adherents, many of the assumptions about technology and tech determinism are already Baked In to the way we deal with tech in the world.
Listen on as we explore what this is all about...
Sign up for the Newsletter here: https://implausi.blog/newsletter/
Bibliography:
Chee, F. Y., & Mukherjee, S. (2023, November 10). Musk’s X has a fraction of rivals’ content moderators, EU says. Reuters. https://www.reuters.com/technology/musks-x-has-fraction-rivals-content-moderators-eu-says-2023-11-10/
Drolsbach, C., & Pröllochs, N. (2023). Content Moderation on Social Media in the EU: Insights From the DSA Transparency Database (arXiv:2312.04431). arXiv. http://arxiv.org/abs/2312.04431
FediForum.org. (n.d.). FediForum | Happiness in the Fediverse. Retrieved May 26, 2024, from https://fediforum.org/2024-03/session/4-d/
Green Blackboards (And Other Anomalies)—Penny Arcade. (n.d.). Retrieved May 19, 2024, from https://www.penny-arcade.com/comic/2004/03/19/green-blackboards-and-other-anomalies
Oldemburgo de Mello, V., Cheung, F., & Inzlicht, M. (2024). Twitter (X) use predicts substantial changes in well-being, polarization, sense of belonging, and outrage. Communications Psychology, 2(1), 1–11. https://doi.org/10.1038/s44271-024-00062-z
Rheingold, H. (2000). The Virtual Community: Homesteading on the electronic frontier. MIT Press.
What if you rebuilt Twitter from the ground up, and it ended up being as toxic as the old one? Did you do something wrong, or were you just wildly successful? That's the question we're trying to address in this week's episode, but perhaps we need to approach this from a different angle. So let me ask you: when you visit a website online, or use an app on your phone, how does it make you feel?
Do you feel happy? Amused? Upset? Angry? Enraged? And did it always feel that way? Did it used to feel good, and then perhaps it took a turn for the worse, became a little bit more negative? If it doesn't make you feel good, why do you keep going back? Or perhaps you don't; perhaps you move on to someplace new, and for the first little while it's cool, it feels a lot like the old place used to be, but, you know, before things changed, before other people came along, or before the conversation took a turn for the worse.
But the question is: How long before this place starts going downhill too, before the same old tired arguments and flame wars that seem to follow you around through the years and decades keep catching up to you? I mean, maybe it's you, there's always a chance, but let's take a moment and assume we're not slipping into solipsism here, as this seems to be a much more widely reported experience, and ask ourselves if maybe, just maybe, that negativity that we experience on the internet is something endemic.
It's part of the culture, it's baked in.
Welcome to The ImplausiPod, an academic podcast about the intersection of art, technology and popular culture. I'm your host, Dr. Implausible. And in this episode, we're going to address the question of how much of your experience online is shaped by the environment you're in and how it is constructed.
Because there is no such thing as a natural online environment; all of these things are constructed at some point, but it's a question of what they're constructed for. We know that social media spaces can often be constructed for engagement, which is why they lend themselves to rage farming and trolling. But how far back does it go?
We know we see commonalities in everything from Facebook and Twitter, to YouTube comment sections, to web forums, to Usenet, to email. Are these commonalities that we see related to the technology? Is there an element of what's called technological determinism at play? Or are the commonalities that we see just related to the way that humans communicate, especially in an asynchronous environment like we see online?
Hmm. Or perhaps it's something cultural. It's part of the practice of using these tools online. And as such, it gets shared and handed down, moves from platform to platform to platform, which is what we seem to see. Now it could be a combination of all of these things, and in order to tease that out, we're going to have to take a look at these various platforms.
So I'll start with the one that was the genesis for this question for me: Mastodon, which is built on the ActivityPub protocol. Mastodon in many ways replicates the functionality of Twitter, along with the look and feel, with toots replicating the tweets, the short microblog posts that may include links or hashtags, an image, or short video clips.
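As a rough illustration, and not something taken from the episode itself, here's approximately what one of those microblog posts looks like as it moves between servers: ActivityPub wraps an ActivityStreams "Note" in a "Create" activity and delivers it to followers' inboxes. The account address and field values below are placeholders, a minimal sketch rather than a real post.

```python
# Minimal sketch of an ActivityPub "Create" activity carrying a Note (a toot).
# The actor address and contents are hypothetical placeholders.
import json

create_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://mastodon.example/users/alice",        # hypothetical account
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
    "object": {
        "type": "Note",                                      # the microblog post itself
        "content": "<p>Hello, fediverse! #Introductions</p>",
        "attributedTo": "https://mastodon.example/users/alice",
        "published": "2024-05-26T12:00:00Z",
    },
}

print(json.dumps(create_activity, indent=2))
```

Short text content, an author, a timestamp, an audience: structurally, it's not far off a tweet.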
And depending on the client you're using to access it, you'd hardly notice the difference. It's this similarity that led me to the question that started off the show: what if you rebuilt Twitter and it still ended up being toxic? So in order to explore this question, we're going to take a quick survey of the field and look at the problems that can be seen in a lot of different social media platforms.
Then we'll go into more depth on the potential causes that we mentioned, including the technology, the nature of communication online, as well as cultural factors, and then conclude by seeing if there might be a more hopeful or optimistic way that we can approach this and our online interactions.
So when we look at these online platforms, you might start to see how they're all just a little bit broken. While we're an overwhelmingly positive podcast here, and we try and accentuate the positive elements that exist in our society, I'll admit: sometimes it's a little bit hard. And when we start looking at online platforms, we can see that, much like families, each dysfunctional one is dysfunctional in its own way.
However, that being said, we might be able to tease out a few trends by the end of this. Our baseline for all of this is, of course, going to be Twitter. Whether you call it X or Twitter, it's been one of the most studied of the social media platforms, and that gives us a wealth of data. And it also allows us to make a clear distinction, by calling it Twitter prior to the acquisition by Elon Musk and X afterwards. But regardless of whether we look at Twitter or X, the results aren't great.
In a recent study out of the University of Toronto by Victoria Oldemburgo de Mello, Felix Cheung, and Michael Inzlicht, the authors find that there are no positive effects on user well-being from engaging with X. Even the occasionally touted greater sense of belonging from participating in the platform didn't lead to any long-lasting effects.
Instead, what they found was an immediate drop in positive emotions, so things like joy and happiness are right out the window, and there was an increase in outrage, political polarization, and boredom. So using X, even if you're a little bit bored, is probably a net negative. And this is just from a recent study.
That isn't counting the systemic changes that have taken place on the platform since the acquisition by Elon Musk: the platforming of hate speech, the reduction of moderator tools, the increased attack vectors from removing the ability to block harassers, and all the other changes that have taken place as well, including creators just up and leaving the platform.
But that's the state of things right now. The question is: did Twitter always suck? And the answer is, kind of, yeah. The University of Toronto study we mentioned was collecting data back in 2021, prior to the acquisition by Elon Musk, so if the reported outrage and lack of joy date from back then, and things have gone downhill since, then I can't really imagine what the place is like now.
But enough about the service formerly known as Twitter. When looking at some of its competitors, what are their downsides? Are they just as toxic? There's Threads, the Facebook-owned offshoot of the Instagram platform, primarily focused on text-based messaging. It launched in July of 2023, and it came together rather quickly, seemingly as an attempt to capitalize on the struggles that Twitter was having, struggles that soon led to it being rebranded as X later that month.
One of the challenges with Threads is that they're adding features as they go, and while they leveraged their existing user base from Instagram, it hasn't led to the level of active retention that one might expect. Despite the lack of explicit advertising, they still have issues with spam posts, for example.
And then there's the whole challenge with Facebook ownership in general, which we've discussed in previous episodes, like when we talked about Triple E back in episode 15. Bluesky, or bsky, was another Twitter alternative, built on the prospect of having an open-source social media standard, and up until May 5th of 2024, it had Jack Dorsey, a former Twitter CEO, on its board.
His departure is indicative of some of the challenges that lie there: that it's somewhat lifeless, with minimal community involvement, and that despite it being built as a decentralized platform, until that actually gets rolled out it very much remains under a centralized form of control. Usenet, the almost-OG social network built off the Network News Transfer Protocol, or NNTP, that we talked about a lot back in episode 10, still exists, technically, but the text-based servers are mostly dead, with tons of spam and minimal community, though there are a few diehards that try and keep it going.
The existence of the binaries groups there as a file transfer service is a completely separate issue, far beyond what we're talking about here. LinkedIn, the social network for business professionals, feels incredibly inauthentic and performative, and the functionality that you find there would be better served on almost any other social media platform.
Reddit, with all the pains that it had in 2023 with its shift toward an IPO and the moderator strikes, is still a going concern with high user counts, but a lot of that content may now be fed into various AI platforms, turning conversations into just so much grist for the mill. Stack Overflow, the tech-based Q&A site, has done much the same thing, turning all that conversation into just so much AI fodder.
Platforms like Discord are, again, under corporate control, which may lead to all the content therein being memory-holed. And that brings us back to Mastodon, which, despite all the promises of an open social web, can have, in certain places, an incredibly toxic community. It'll have federation wars as various servers join or disband based on ideological differences with other active servers; there are access problems for a number of different users, differing policies from server to server, and inconsistent moderation across all of it. And despite all these problems, it might be one of the best options when it comes to text-based social media.
So this brings us back to our main question, why do they all suck? Is it something that's baked in? Is it something that's determined by the technology?
So let's take a moment and introduce you to the idea of technological determinism. Tech determinism is a long running theory that's existed in some form or other since the 19th century. Technological determinism posits that the key driver of human history and society has been technology in its various forms.
It leads to a belief that innovation should be pursued, sometimes at all costs, and that the solution to any issue is more technology, even if those issues were caused by other technologies in the first place. Tech determinism exists on a bit of a spectrum, where its adherents can be more or less hardcore with respect to how much technology determines our history and how much attention is paid to any explanation outside the scope of technology.
According to technological determinism, all social progress follows tech innovation, and there's a certain inevitability that's part and parcel with that. If I was able to license music for this show, I'd cue up You Can't Stop Progress by Clutch, off their 2007 album From Beale Street to Oblivion. But, uh, in this case I'll just ask you to go to YouTube or your other music streaming site, or grab your CD off the shelf and put it in and play along.
But back to our spectrum. Hardcore technological determinists don't think society or culture can have any impact on technology, or at least the direction of it. And that goes back to that inevitability that we were talking about. There's a softer form of technological determinism as well, where the technology can be dependent on social context and how it is adopted.
And this ties back to what Anabel Quan-Haase talks about as social determinism: social norms, attitudes, cultural practices, and religious beliefs are perceived as directly impacting how technology is used and what its social consequences are. This is a little bit more of a nuanced view, and it takes us away from the instrumental view, where technology is seen as neutral and just a tool to be used.
But as pointed out by Langdon Winner back in 1980, in a rather famous article, "Do Artifacts Have Politics?", that neutrality is something that's very much circumscribed. The design of a tool can have very specific impacts on how it is used in society. And I think this starts bringing us back to those design spaces that we're talking about, those online platforms.
Each of them presents itself in various ways and suggests various actions that might be taken. These are what Don Norman calls affordances, or the perceived action possibilities of a certain piece of technology. When it comes to online spaces, it doesn't matter whether that space is presented to the user on a smartphone or on a desktop computer, laptop, or some kind of terminal: the preferred form of action is going to be presented to the user in the most accessible place to reach.
This is why you'll see the swipe or like or comment buttons presented where they are. On a smartphone, that's anything that's in easy reach of the thumb of a right-handed user. For X, it's that little blue button in the right-hand corner, just begging you to use it. And by reducing the barrier to entry to posting, you get a lot of people posting really quickly.
Emotionally reacting to things, getting the word out there. Because, heaven forbid, somebody is wrong on the internet. And this leads us to the second factor that may be leading to such horrible online communication: the very nature of online communication itself. And this has been recognized for a long, long time.
At least 20 years. On March 19th, 2004, in a post titled “Green Blackboards and Other Anomalies”, the world was introduced to the GIFT theory. And we'll call it the GIFT theory because we're on the family friendly side of the podcast sphere. As Tycho from Penny Arcade explained at the time, a normal person plus anonymity and an audience equals a GIFT.
And because that anonymity was kind of part and parcel of online interactions, where you really didn't know who you were dealing with, and because all identities online were constructed to a degree, it might lead people to say things online or behave online in ways that they wouldn't if they were face to face with the person.
And because having an audience can allow for someone to get a larger reaction, people might be more predisposed to behave that way, especially if they thought their words couldn't be traced back to them. Now, this is 2004, so pre-social media; Twitter and Facebook would take off after that. And it became slightly more common for people to post using their real names, or at least a slightly more recognizable one.
And we found out that that really didn't change things at all. So perhaps it has more to do with the audience rather than the anonymity. Regardless, the culture that had developed through early Usenet and then AOL chat rooms, through to online gaming, instant messenger apps, and IRC, kept encountering the same problems.
Which the tech determinists would take as a sign that the technology is the cause. But what if the social determinists are right? Social determinists are the flip side of the tech determinists, holding that all the interactions that take place are due to social cues. This leads us to our third potential cause.
What if it's the culture of online interaction? In 1993, Howard Rheingold published one of the first books on online societies, The Virtual Community, subtitled Homesteading on the Electronic Frontier. It was based on his experience as a user of the WELL, the Whole Earth 'Lectronic Link, a BBS based in San Francisco run by computer enthusiasts who were part of the Whole Earth Catalog.
Following up on his previous books on hackers and virtual reality, he wrote a book that took a wide-ranging survey of the state of the web in 1993. Or at least, what we now call the web, as much as the book focused on BBSs and other portals like the WELL, terminal systems like France's Minitel, commercial services like CompuServe, and email, all under the umbrella of CMC, computer-mediated communication.
Though this acronym is now largely forgotten, save for in certain academic circles, it bears repeating and reintroducing to those unfamiliar with the term, for the distinction it makes. (And not that I'm saying a term is acting with intentionality here, I'm not that far down the memetic rabbit hole, but rather that we can consider it as the focus for our agentive discussion.)
Rheingold was looking at early implementations of the web, cross-cultural implementations, when these were largely local phenomena, national at best, and rarely at the international level that we now expect. He looked at France's Minitel and CalvaCom, as well as sites in Japan and the WELL on the West Coast of the United States.
Yes, they could all be accessed from outside those regions, but long distance was costly and bandwidth was low. And time and again, the same phenomena were observed. Talking with Lionel Lombroso, a participant with CalvaCom in France, about his experiences in the '80s, one of the biggest challenges was dealing with the perpetual flame wars, in this case one involving Microsoft and the evils therein.
Lombroso goes on to state: "I think online is a stage for some people who don't have opportunities to express themselves in real life." Again, this is the late '80s, early '90s. HTTP is just being invented around the same time. The web as we know it doesn't exist yet, but online communication, computer-mediated communication, does.
And they're seeing this already, where arguments based on politics or ideology lead to intractable discussions, which invariably force decisions to be made between censorship and free expression, and attempts to limit the flame wars shift to that same choice regardless of the forum, as has been seen on the WELL, TWICS, Calva, and so many other sites as well.
So, if antagonism online goes back this far, if we can see the roots of the quote-unquote Seven Deadly Sins, then perhaps we're close to finding our answer: antagonism online can largely be a cultural thing. And just as a parenthesis, ask me sometime about those seven deadly sins and I can tell you how you can tell if you're stuck in a 7G network.
If online toxicity is well and truly baked in, being part and parcel of the culture from the very beginning, is there a way to fight back against it? One of the biggest problems is the expectations of use. With Mastodon, for instance, which looks and feels a lot like Twitter in many ways, a lot of the initial participants are coming directly from Twitter and bringing all their old habits and patterns with them, for good.
The tech may be new, but it looks like the old tech and provides the affordances of the old tech, so it gets used in similar ways by people who expect it to behave in a certain way. And they may not be entirely conscious of that. Much like Taylor Swift sings: it's me, hi, I'm the problem, it's me.
So how might this be combated? There's a number of options, and they're not mutually exclusive. The first is to change the interface in order to change the interaction. This may be productive, as it would shake the users out of assumed patterns of use. However, it's double edged, as one of the elements that makes a new platform attractive is its similarity to other existing platforms.
And to be clear, despite the similarity of interface, tools like Mastodon are still facing an uphill battle in attracting or retaining users that are leaving X and/or Twitter. And I say "and/or" because, despite it now being X, we're talking historically, over the entire period that tools like Mastodon have existed.
The second option can be heavier moderation. And this can be one of the big challenges for the Fediverse, which largely operates on donations and volunteer work. This approach has been taken by some private entities, and in the EU the DSA, that's the Digital Services Act, has required large social media platforms to disclose the number of moderators they have, especially in each language.
And in articles from Reuters and Global Witness published in November and December of 2023, we got a look at what some of those numbers were. For example, X had 2,294 EU content moderators, compared with 16,974 for YouTube, 7,319 at Google's Play service, and another 6,125 at TikTok. And those numbers are largely for the English moderators.
The numbers drop off rapidly for non-English languages, even in the EU. And if large multinational corporations are challenged by and struggling with the ability to moderate online, the largely volunteer versions that exist in the Fediverse can have even less recourse.
So a third solution may be education on social norms and online toxicity. In this, networks like the Fediverse have some advantages, as they've been able to put in tools to assist users and creators in modifying their content in certain ways: content warnings, which can hide certain content by default; alt text for image and media descriptions for people who use screen readers; and camel case for hashtags, in order to increase readability.
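To make the first of those tools concrete, here's a minimal sketch, not taken from the episode, of posting a status with a content warning through Mastodon's REST API. The instance URL and access token are placeholders, and the CamelCase hashtag reflects the readability practice mentioned above.

```python
# Minimal sketch: post a Mastodon status with a content warning (spoiler_text).
# The instance URL and token are hypothetical placeholders; supply your own to run it.
import requests

INSTANCE = "https://mastodon.example"   # hypothetical server
TOKEN = "YOUR_ACCESS_TOKEN"             # hypothetical access token

resp = requests.post(
    f"{INSTANCE}/api/v1/statuses",
    headers={"Authorization": f"Bearer {TOKEN}"},
    data={
        "status": "Longer thoughts on moderation policy below... #FediverseMeta",
        "spoiler_text": "Moderation / meta discussion",  # shown as the content warning
        "visibility": "unlisted",                        # keep it off the public timelines
    },
)
resp.raise_for_status()
print(resp.json()["url"])
```

Note that the affordance is opt-in: nothing forces the poster to fill in that spoiler_text field.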
But all of this is a long and constant battle, as it's on the user to put these tools to use. And we've seen earlier forms of this happen online, as recounted in the Eternal September, and you can check out our old episode on that. But, as the name implies, it keeps happening, as platforms need to acculturate the influx of new users so they can use the platform successfully.
And as those new users still have all the same expectations of use that they've picked up in every online interaction they've had up to that point in time, it's still going to be a sticking point. So maybe we have to put it on the user, which leads us to our fourth option: that the user needs to be the change that they want to see.
And I can see reflections of this in my own online interactions; I realize maybe I wasn't the best online citizen in the past, but, you know, we can all reflect on how we interact online and try and do better in the future. One simple method would be to follow George Costanza's lead. And I'm serious about this: in season 5, episode 22 of Seinfeld, the episode called The Opposite, Costanza tries doing the opposite of his instinct for every choice and interaction he has, and his life ends up improving because of that.
He realizes that, hey, much like Taylor Swift, he might be the problem. And he tries to do better and make conscious decisions about how he's interacting with people online. I don't know if that's something you can implement in software, but there are methods, like notifications that pop up when somebody's going to reply to somebody they've never interacted with before.
Or, for instance, notifications for users when they're going to post something online, letting them know that, hey, this is being distributed to a mass audience and not to your 12 closest friends; a rough sketch of what such nudges might look like follows below. The other option for trying to be the change you want to see would be to just actively work to make the internet a better place.
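Here is a purely hypothetical sketch of what friction nudges like those could look like in code; none of this is drawn from any real client, and the Draft fields, the follower threshold, and the wording are all invented for illustration.

```python
# Hypothetical sketch of pre-posting "friction" nudges; not from any real client.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    author: str
    in_reply_to: Optional[str]   # account being replied to, if any
    visibility: str              # "public", "unlisted", "followers", ...
    follower_count: int

def nudges_for(draft: Draft, past_contacts: set) -> list:
    """Return the reminders a client could surface before the post goes out."""
    notes = []
    if draft.in_reply_to and draft.in_reply_to not in past_contacts:
        notes.append("You've never interacted with this account before. Re-read before sending?")
    if draft.visibility == "public" and draft.follower_count > 100:  # arbitrary threshold
        notes.append(
            f"This will be visible to everyone, not just your {draft.follower_count} followers."
        )
    return notes

# Example: a public, first-time reply triggers both nudges.
draft = Draft(author="alice", in_reply_to="bob", visibility="public", follower_count=1200)
print(nudges_for(draft, past_contacts={"carol", "dave"}))
```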
And we can see that kind of active work in things like the happiness project: on March 20th, 2024, the second day of the third FediForum, an unconference where individuals can come together online to discuss things related to the Fediverse, the ActivityPub protocol, Mastodon, and other ActivityPub tools, Evan Prodromou, a co-author of ActivityPub, convened a panel on happiness in the Fediverse, and the discussion centered around what makes us happy when we engage online.
How do we build those strong social ties and positive engagement that we'd love to see in our own lives? How do we ensure that our social networks lead to positive mental and physical health, well-being, and a positive mindset overall? Those are not easy questions, by any means. One of the things the participants noted is that happiness requires active work, in that posting positive things requires an act on the part of the creators there, and it's not always easy.
There can be a number of very stressful things that are inherent in social media, and especially the ways we use them now. As I participated in the panel, I mentioned some of the things that I've brought up previously, both in this episode and in previous ones, letting them know that we may need to be much like George Costanza and try and do the opposite.
But I also left the panel with the question that I began this episode with: how much of your experience online is dictated by the environment you're in and how it's constructed? We need to consider both the architecture and the practices. And perhaps this is ultimately the solution: we create community by building a better place, supplemented by the technology, but created through the culture and patterns of use.
It has to be explicit though, as good interactions may go unnoted. And those who are unaware of them, or those who are new, may not notice that things are done differently. Ultimately, all these things can be incredibly positive for community. However, what happens when your community is taken away from you?
We'll look at that possibility in the next episode of the ImplausiPod.
Once again, thank you for joining us on the ImplausiPod. I'm your host, Dr. Implausible. You can reach me at Dr. Implausible at implausipod.com, and you can also find the show archives and transcripts of all our previous shows at implausipod.com as well. I'm responsible for all elements of the show, including research, writing, mixing, mastering, and music, and the show is licensed under a Creative Commons 4.0 ShareAlike license.
You may have noticed at the beginning of the show that we describe the show as an academic podcast, and you should be able to find us on the Academic Podcast Network when that gets updated. You may have also noted that there was no advertising during the program, and there's no cost associated with the show, but it does grow through the word of mouth of the community, so if you enjoy the show, please share it with a friend or two and pass it along.
There's also a buy me a coffee link on each show at implausipod.com, which would go to any hosting costs associated with the show. Over on the blog, we've started up a monthly newsletter. There will likely be some overlap with future podcast episodes and newsletter subscribers can get a hint of what's to come ahead of time.
So consider signing up, and I'll leave a link in the show notes. Coming soon, we'll be following up on this episode with what happens with the loss of online community, in an episode titled TikTok Tribulations. After that, we'll have some special guests join us for a two-part discussion of the first season of the Fallout TV series, followed by a look at the emergence of the dial-up pastorale, and then the commodification of curation. I think those episodes will be fantastic, and I can't wait to share them with you. Until then, take care, and have fun.