Two Fundamental Concerns about Social Media

The bulk of my concerns about social media can be broken into two fundamental areas: its psychological impacts on us and the centralization of our data. Almost everything else (and most of what you’ll read about on this blog) is an offshoot of these fundamental concerns. This is a brief summary of both.

Psychological Impacts

There are a lot of questions about what social media is doing to our brains, which researchers are only just beginning to explore. Is it behind the increase in teen depression and suicide rates? Does it lead to increased bullying? Does it lead to a decreased sense of self-worth? Are we addicted to it? If so, why are we addicted to it? Is there an evolutionary component to our behavior on social media?

I think the answer to all of these questions is that we really don’t know yet. Researchers have established links between unhappiness and social media use, but we don’t yet understand the extent of the impact. We are also still in the early days of social media, so there is no way to gauge the full extent of its long-term psychological effects.

Centralization of our data

This might seem like two separate concerns – outsized power and data collection – but they are really two sides of the same coin. Due to network effects, the world of social media has always been destined to consolidate into a few platforms. These are Facebook, Instagram, WhatsApp, Twitter, and Snapchat – at least in the US. Because so much of our online congregation is concentrated there, these firms now have outsized power and are essentially monopolies. When a firm is a monopoly, it doesn’t have an incentive to fight back against abusive trolls or fake news.

It is also in a position to collect vast amounts of data on its users, which it can then sell to the highest bidder. When we learn that a company is abusing our data, it might create a brief PR storm, but that soon blows over. The truth is that social media users do not have the ability to choose an alternative provider that does not abuse their data.

Steven Johnson wrote an excellent piece in the New York Times in early 2018 about blockchain and the benefits of a decentralized internet. The (long) article is well worth a read. It speaks to the benefits of decentralization, verification, open protocols, and the lack of an “owner”. Why bring up an article about blockchain on a blog about social media? Look at this quote:

The true believers behind blockchain platforms like Ethereum argue that a network of distributed trust is one of those advances in software architecture that will prove, in the long run, to have historic significance.

The Bitcoin bubble has become a distraction from the true significance of the blockchain. If we look past the speculative bubble, we can see the potential of a “network of distributed trust”: a decentralized and democratized version of the internet we know today. Our identities would no longer be housed in Facebook’s or Google’s walled gardens. We could have an identity based on open protocols, and we could take it from platform to platform as we please. There would be penalties for abusing our data.

It is helpful to understand the two layers of the internet. The first is based on open protocols developed in the 1970s, which still exist today – email and web browsing still run on them. This layer is decentralized. The second layer consists of the platforms we use to access the internet today – private companies such as Facebook or Twitter. This layer is proprietary and highly centralized.

Johnson argues that measures like keeping smartphones away from kids and government regulation are commendable, but will not cure all of society’s ills. This belief – that there is no silver bullet to protect us from social media – is a fundamental reason this blog exists.

Johnson paints a vision of a decentralized future with open protocols overtaking the highly lucrative private platforms that exist today. Blockchain, after all, has shown us that it is possible for everyone to agree on the contents of a database without the database having an “owner”.

I hope he is right. But his vision requires the success of swashbuckling punk rockers driven purely by a mission to restore the internet to its original utopian vision. It would require them to turn a blind eye to the gobs of money that will be thrown at them to keep the internet closed.

Until then, the social media platforms we use will remain closed and highly centralized. Our data lives with these private companies, which earn their revenue by harvesting and selling it. While some are perfectly comfortable with Fortune 500 companies selling our personal data, it is important to remember that these private companies are vulnerable to attacks, and our personal information can fall into shadier hands.

This all leaves me thinking about a very prescient tweet that I saw once, which I would love to attribute to its author, but cannot remember:

The only businesses that refer to their customers as “users” are tech companies and drug dealers

This is what drives this blog. There are many questions about what social media is doing to us, yet most of the effort to understand social media goes into monetizing our attention. How can businesses advertise to users as they spend time on these platforms? And how can the social media platforms monetize the time we spend using their technology? It’s fine that many smart people are dedicated to answering these questions – because if they weren’t, bad actors would fill that void.

However, the conversation about what social media does to our brains seems limited to fragmented academic studies. Discussions of social media’s centralization and its impact on society have become more mainstream recently, but do not seem to have changed the fundamentals of how social media platforms operate.

Those of us who seek to understand what is happening to us as individuals and as a society will never come to a satisfying conclusion. We seem destined only to uncover more questions. Questions I am happy to continue asking.


Social Media / Trolls / Free Speech

The bulk of social media companies were founded and are headquartered in the United States. In the US, we enjoy and believe strongly in our freedom of speech, and go to great lengths to protect it. It is not surprising, then, that freedom of speech comes up frequently in discussions about how companies should police trolls on their social networks.

Trolls are an issue across pretty much every social media platform. Wikipedia defines an internet troll as “someone who posts inflammatory, extraneous, or off-topic messages in an online community, such as a forum, chat room, or blog, with the primary intent of provoking readers into an emotional response or of otherwise disrupting normal on-topic discussion.” While this definition makes internet trolls seem like a mere distraction, trolling can take many forms, often devolving into misogynistic, homophobic, racist, and otherwise ethnocentric harassment. It is nearly universally accepted that internet trolls are bad, but approaches to stamping them out vary.

For instance, Facebook and Instagram take a more aggressive approach than Twitter or Reddit. Victims of harassment on the former two platforms might argue to the contrary, but Facebook does at least require you to use your real identity. On the other hand, Twitter – and of course the wild west of the internet, Reddit – willfully allow users to exist under a cloak of anonymity.

Reddit and Twitter have long viewed themselves as bastions of free speech. Free speech is an important right, but as we all learned in school, we cannot yell “fire” in a crowded theater. Freedom of speech has its limits. When it comes to speech on Reddit, pretty much anything goes. Reddit is composed of groups called subreddits. The subreddits are monitored by “redditors” – ordinary users of Reddit – though the term “monitor” applies very loosely: they merely upvote or downvote a comment, which determines how much visibility it gets. As a result, a subreddit filled with hateful redditors frequently has hateful comments bubbling up to the top and going viral. This all happens in plain sight with no intervention from the powers that be within Reddit.
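The visibility mechanism described above can be sketched in a few lines. This is a deliberately simplified model – a bare “net score” of upvotes minus downvotes, which is an assumption for illustration; Reddit’s actual ranking also weighs factors such as post age – but it shows why a hateful comment with enough upvotes bubbles to the top with no human intervention.

```python
# A minimal sketch of vote-based comment ranking, assuming a simple
# net-score model (hypothetical; real ranking algorithms weigh more
# factors, such as the age of the comment).

def net_score(upvotes: int, downvotes: int) -> int:
    """Return the net score that determines a comment's visibility."""
    return upvotes - downvotes

def rank_comments(comments):
    """Sort comments so the highest-scoring ones surface first."""
    return sorted(comments,
                  key=lambda c: net_score(c["up"], c["down"]),
                  reverse=True)

comments = [
    {"text": "thoughtful reply", "up": 12, "down": 3},
    {"text": "hateful comment", "up": 40, "down": 5},
    {"text": "off-topic aside", "up": 2, "down": 8},
]

for c in rank_comments(comments):
    print(c["text"], net_score(c["up"], c["down"]))
```

Notice that nothing in the ranking inspects *what* a comment says – only how the crowd voted on it, which is the crux of the problem in a hateful subreddit.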

Some will argue that trolls have always existed, and the trolls on social media platforms are nothing to worry about. But in earlier times, trolls had to show their faces and use their actual voices to harass someone. This type of behavior invited well-deserved shame and obviously didn’t scale well. We now live in a world where an army of Twitter eggs (users with no avatar) can say whatever they want to anyone they want with no repercussions because they are free to exist as anonymous users on the internet. Twitter can suspend an individual account, but how hard is it to create a new one with a different email?

Twitter deserves credit for at least trying to take on the trolls. It recently revised its approach, citing a 4% decrease in reports of abuse. Its new approach, referred to by many as “out of sight, out of mind”, decreases the visibility of tweets from users who display behavior consistent with that of trolls. Such behavior could include signing up for multiple accounts at once or repeatedly tagging users who don’t follow them back. As I’ve argued before, there is no silver bullet for many of the problems that have accompanied the rise of social media. It’s great that Twitter is trying a new tactic, from which we will likely learn more about policing trolls. However, it is not going to end trolling on Twitter. If Jack Dorsey and his team at Twitter are truly dedicated to furthering free speech, and I believe they are, they would be wise to stay vigilant in their pursuit of the trolls.
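The “out of sight, out of mind” idea amounts to down-ranking, not removal. A rough sketch of how such behavioral signals might be combined is below – every threshold and function name here is an illustrative assumption of mine, not Twitter’s actual rules:

```python
# A hypothetical sketch of behavior-based down-ranking. Accounts
# showing troll-like patterns (e.g., many accounts created at once,
# repeatedly tagging users who don't follow them) are flagged, and
# flagged accounts' tweets get reduced visibility rather than removal.
# All thresholds are illustrative assumptions, not Twitter's policy.

def looks_like_troll(accounts_created_together: int,
                     mentions_of_nonfollowers: int,
                     *,
                     account_limit: int = 3,
                     mention_limit: int = 10) -> bool:
    """Return True if behavior crosses the illustrative thresholds."""
    return (accounts_created_together > account_limit
            or mentions_of_nonfollowers > mention_limit)

def visibility_weight(is_flagged: bool) -> float:
    """Flagged users' tweets are down-ranked, not deleted."""
    return 0.2 if is_flagged else 1.0
```

The key design point matching the description above: the account is never suspended, so the troll can keep tweeting – their tweets simply surface far less often, which is why this tactic reduces reports of abuse without ending trolling.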

Good intentions aside, the success of this effort really hinges on the business aspect of it. If Twitter believes, and its investors agree, that curtailing trolls is good for business, then there is hope. The problem is that it would be difficult for Twitter to effectively police trolls on its platform without impacting the free speech of all of its users. Difficult, but not impossible. More accurately, it would be expensive.

The issue is that an algorithm can’t solve it all – though that is often the first, second, and third approach of most Silicon Valley companies. Think back to school. The school had rules about how to behave and the language we could use. But in the cafeteria, it relied on monitors – real people – to ensure that we adhered to those rules. This same approach would be required of Twitter and other social media companies to rid their sites of trolls.

The problem with real people is that they are expensive. Deploying them en masse to stamp out trolls is not conducive to the kind of margins enjoyed by large tech firms and demanded by their investors. Policing the trolls would need to show not only an impact on abuse, but also higher revenues for the company. If Twitter sees that fewer trolls lead to more users and greater engagement, it will conclude that trolls are bad for business. If it concludes that the presence of trolls is not keeping users away, policing the trolls will only add costs and hurt Twitter’s bottom line. Until the link between trolls and a company’s bottom line is established, attempts to stamp out the trolls will be mere PR fodder.