
Hate Speech and Social Media: How to Govern It?


The tragic events in Christchurch, New Zealand, have revived broad debates over the appropriateness, limits, and strategies of governing hate speech on social media. A terrorist who killed 50 people and wounded dozens live-streamed his attack on Facebook. Millions of people witnessed the deaths of innocent victims. Millions of others copied and disseminated the video, seeding hatred and conflict across the world. The incident has raised new questions about the best ways to address hate speech on social media. The task is sensitive, tedious, and vulnerable to public criticism, mainly because of considerable variation in how hate speech is defined and diametrically opposed views on the limits and benefits of free speech.

It should be noted that the problem of hate speech on social media is not new. For years, social media users have struggled to limit the scope of hate attacks and to protect themselves from verbal abuse and emotional violence. Today, hate speech is an everyday reality for Internet users. According to Cawley (2018), 41 percent of American users have experienced harassment online. The terrorist attack in Christchurch suggests that hate speech online knows no limits. Moreover, it can take novel and covert forms that are difficult to detect and even more difficult to counter.

Hate speech on social media generates controversial reactions, mostly because of the developed world's unquestionable commitment to free speech. The First Amendment of the U.S. Constitution creates a foundation of openness, transparency, and free speech for every citizen (Peters, 2018). Social media platforms use these constitutional protections to the fullest. They are more likely to allow hate speech than to limit it, fearing public allegations of violating the First Amendment. The boundary between free speech and hate speech is vague. According to Peters (2018), “pure hate speech is constitutionally protected.” However, hate speech does not merit constitutional protection when it targets a particular individual or group for harm, such as a true threat of physical violence.

Deciphering the thin red line that separates protected from unprotected hate speech is a herculean task. It requires constant monitoring and trained personnel, and vetting every piece of content is both monstrous and costly. On the other hand, social media platforms, as a medium, carry no legal responsibility for the content shared by their users. Therefore, any strategy to govern hate speech on social media must be targeted and economical. It should contain explicit criteria for identifying the symptoms of hate speech while ensuring that users' constitutional rights are duly protected. Such strategies may include, but are not limited to, user education, speech policies adopted by social media platforms, and more decisive steps and consequences for acts of hate speech online.

Effective communication, tolerance, and respect for diversity begin with education. It is one of the most promising strategies for combating hate speech online, yet it is increasingly difficult to implement. Most schools and higher education institutions offer ethics classes to their students. However, even excellence in ethics studies does not automatically translate into tolerance and mutual respect on social media. Anonymity is the key barrier to governing hate speech online: users who cannot afford to misbehave at home, in the workplace, or at school believe they can vent their hatred online without being detected or facing consequences for their acts. As such, education, and social media education in particular, should be thoroughly integrated into school and university programs. At the same time, steps should be taken to govern social media and ensure greater transparency of online interactions. It is also important to counter hate speech before it leads to abuse and crime.

It is time for social media to adopt speech and communication policies. The latest data indicate that most social media platforms manage to remove 75 percent of hate speech within the first 24 hours after it appears online (Beswick, 2019). Leading platforms such as Facebook and Twitter have signed the so-called “Brussels code of conduct” (Beswick, 2019), under which they assume responsibility for removing flagged content within the first 24 hours. It is an example of a collective strategy for tackling hate speech online. Social media platforms are in a position to regulate the availability of hateful content provided by users. Of course, mistakes do happen: at times, platforms mistake appropriate content for hate speech; at other times, hate speech wanders across platforms without being detected and addressed. Nevertheless, steps like these reduce existing tensions and give hope to those who have tragic experiences of hatred and abuse online.

As more action is taken to detect actionable hate speech on social media, proportionate consequences must be administered. Consequences should be lawful, based on credible evidence, and proportionate to the severity of the actionable hate speech. Users should be held accountable for what they post online. Likewise, social media platforms should be responsible for the way they handle hate speech. It is a form of collective responsibility, which has nothing to do with censorship. In the age of freedom and global communication, there should be a way to balance free speech with protecting society from actionable hate speech online. We have all been passive in our past reactions; it is time to become more decisive and reduce the scope of hate speech online.

All in all, governing actionable hate speech on social media is both possible and necessary. However, only collective action involving social media platforms, users, and authorities can have a positive effect on communication and interactions online. Strict user policies, social media ethics education, and consequences for actionable hate speech should provide an impetus for changing the social media landscape and making it more tolerant of diversity and self-expression.

Ed.’s Note: Samuel Alemu, Esq. is a partner at ILBSG, LLP. He is a graduate of Harvard Law School, the University of Wisconsin-Madison Law School, and Addis Ababa University. Samuel has been admitted to the bar associations of New York State, the United States Tax Court, and the United States Court of International Trade. The writer can be reached at salemu@gmail.com. Samuel’s Twitter handle is @salemu.

Contributed by Samuel Alemu, Esq.

Note: first published on Reporter English
