The bulk of social media companies were founded and are headquartered in the United States. In the US, we enjoy and believe strongly in our freedom of speech, and we go to great lengths to protect it. It is not surprising, then, that freedom of speech comes up frequently in discussions about how companies should police their social networks when it comes to trolls.
Trolls are an issue across pretty much every social media platform. Wikipedia defines an internet troll as "someone who posts inflammatory, extraneous, or off-topic messages in an online community, such as a forum, chat room, or blog, with the primary intent of provoking readers into an emotional response or of otherwise disrupting normal on-topic discussion." While this definition makes internet trolls seem like a mere distraction, trolling can take many forms, often devolving into misogynistic, homophobic, racist, and otherwise ethnocentric harassment. It is nearly universally accepted that internet trolls are bad, but approaches to stamping them out vary.
For instance, Facebook and Instagram take a more aggressive approach than Twitter or Reddit. Victims of harassment on the former two platforms might argue to the contrary, but Facebook does at least require you to use your real identity. On the other side, Twitter and, of course, the wild west of the internet, Reddit, willfully allow users to exist in a cloak of anonymity.
Reddit and Twitter have long viewed themselves as bastions of free speech. Free speech is an important right, but as we all learned in school, we cannot yell "fire" in a crowded theater. Freedom of speech has its limits. When it comes to speech on Reddit, pretty much anything goes. Reddit is composed of groups called subreddits. The subreddits are monitored by "redditors," who are simply users of Reddit. The term "monitor" applies very loosely, as they merely upvote or downvote a comment to determine how much visibility it gets. As a result, a subreddit filled with hateful redditors frequently has hateful comments bubbling up to the top and going viral. This all happens in plain sight, with no intervention from the powers that be within Reddit.
Some will argue that trolls have always existed, and that the trolls on social media platforms are nothing to worry about. But in earlier times, trolls had to show their faces and use their actual voices to harass someone. This type of behavior invited well-deserved shame and obviously didn't scale. We now live in a world where an army of Twitter eggs (users with no avatar) can say whatever they want to anyone they want with no repercussions, because they are free to exist as anonymous users on the internet. Twitter can suspend an individual account, but how hard is it to create a new one with a different email?
Twitter deserves credit for at least trying to take on the trolls. It recently revised its approach, citing a 4% decrease in reports of abuse. Its new approach, referred to by many as "out of sight, out of mind," decreases the visibility of tweets from users who display behavior consistent with that of trolls. Such behavior could include signing up for multiple accounts at once or repeatedly tagging users who don't follow them back. As I've argued before, there is no silver bullet for many of the problems that have accompanied the rise of social media. It's great that Twitter is trying a new tactic, from which we will likely learn more about policing trolls. However, this is not going to end trolling on Twitter. If Jack Dorsey and his team at Twitter are truly dedicated to furthering free speech, and I believe they are, they would be wise to stay vigilant in their pursuit of trolls.
Good intentions aside, the success of this effort really hinges on the business case for it. If Twitter believes, and its investors agree, that curtailing trolls is good for business, then there is hope. The problem is that it would be difficult for Twitter to effectively police trolls on its platform without impacting the free speech of all of its users. Difficult, but not impossible. More accurately, it would be expensive.
The issue is that an algorithm can't solve it all, even though that is often the first, second, and third approach of most Silicon Valley companies. Think back to school. The school had rules about how to behave and what language we could use. But in the cafeteria, it relied on monitors, real people, to ensure that we adhered to those rules. This same approach would be required of Twitter and other social media companies to rid their sites of trolls.
The problem with real people is that they are expensive. Deploying them en masse to stamp out trolls is not conducive to the kind of margins enjoyed by large tech firms and demanded by their investors. Policing the trolls would need to demonstrate not only an impact on abuse, but also higher revenues for the company. If Twitter sees that fewer trolls lead to more users and greater engagement, it will conclude that trolls are bad for business. If it concludes that the presence of trolls is not keeping users away, policing the trolls will only result in greater costs and a negative impact on Twitter's bottom line. Until the link between trolls and a company's bottom line is established, attempts to stamp out the trolls will be mere PR fodder.