Recently, Mark Zuckerberg told Congress that within the next 10 years Facebook will have artificial intelligence capable of recognizing and removing hate speech. The claim, like many similar ones, is no doubt rooted in good intentions. But ultimately the general public’s relationship to the concept of hate speech is somewhat unhealthy and unproductive. Incidentally, a good place to see the disparity between how people aged 18-25 view the term and how people over 60 do is Facebook itself. Suffice it to say, the people in the latter category likely weren’t the ones pushing for this AI. Regardless, it is worth examining what further censorship of ideas, even objectionable ones, could mean for the users of Facebook and, in a much broader scope, for how people communicate in general.
For starters, it is important to consider what counts as hate speech. The pitch by Zuckerberg, as well as the general rationale put forward by its supporters, is that they want neo-Nazi sentiments removed, along with overtly abusive posts in certain comment threads. These are unquestionably good things to do. But, as with almost anything on the internet, an issue arises when people extrapolate. For some, hate speech is anything and everything that differs from their own beliefs. For evidence of this, one need only compare the results of searching “hate speech” on Google with searching it on Tumblr. Herein lies the problem: while there are certainly cases in which the overwhelming majority of society will agree that a viewpoint is objectionable, any attempt to implement a one-size-fits-all system to enforce that ultimately allows the most vocal and most extreme people to take the helm.
This is why the implications of implementing such an AI program need to be taken into account. On a fundamental level, the program allows someone to serve as a judge of what can and cannot be said. While this article is by no means intended as a conspiracy theory, it should be mentioned that this is the same tactic used by almost every autocratic dictator to take away the ability to speak out against them. It is usually employed to create the illusion of unanimous approval and to make people extremely hesitant to voice any belief outside the norm. Alternatively, an attempt to sanitize what is presented to people will simply result in the vast majority of those affected negatively by it going elsewhere.
In short, doing anything, no matter how well intentioned, that affects people’s ability to express their ideas can have massive adverse consequences both on and off the internet. It may feel wrong to say that people with absolutely deplorable views should be given a platform, or that people should be allowed to be cruel or abusive online, but tolerating them may be the price of preserving the ability to have an opinion in the first place. In the words often attributed to Voltaire, “I may not agree with what you have to say, but I will defend to the death your right to say it.”