Hate Speech Online: 2016 in Review
By Susan Corke, Erika Asgeirsson, and Dora Illei
Twitter trolls and hateful anonymous comments online are not a new phenomenon, but the 2016 presidential election brought online hate speech into the spotlight and—for now—seems to be holding it there. Social media facilitates the rapid spread of ideas online, and hate speech is no exception.
The Anti-Defamation League published a report that found that during the U.S. presidential election, from August 1, 2015 through July 31, 2016, there were over 2.6 million tweets "containing language frequently found in anti-Semitic speech." Additionally, at least 800 journalists received antisemitic tweets. Of those 800 journalists, the top 10 most targeted individuals (all of whom are Jewish) "received 83 percent of these anti-Semitic tweets."
Anti-Muslim images were also highly circulated after the election, according to the Southern Poverty Law Center. Between November 8 and December 8, “more than 1,750 unique photos and memes were distributed.” Some of the anti-Muslim content was directed at foreign leaders like German Chancellor Angela Merkel and London Mayor Sadiq Khan, while other posts “were clearly an attempt to demonize Islam and Muslims.”
Jonathan Weisman, the deputy Washington editor of the New York Times, quit Twitter in June after receiving a flood of antisemitic abuse and threats online. In an op-ed published after he quit Twitter, Weisman explained, "For weeks, I had been barraged on Twitter by rank anti-Semitic comments, Nazi iconography of hook-nosed Jews stabbing lovely Christians in the back, the gates of Auschwitz, and trails of dollar bills leading to ovens."
Though he reported the harassment, it wasn't until Weisman announced on his Twitter account that he would be leaving that he began receiving notices from Twitter that "many of the accounts [he] had flagged had been suspended." However, Twitter did not find many of the accounts that had targeted Weisman, including ones that contained explicitly antisemitic or neo-Nazi imagery, to be in violation of its rules.
Another high-profile incident in the past year was the racist attacks leveled against Leslie Jones, one of the stars of the new “Ghostbusters” movie. After a seemingly endless stream of sexually explicit images, racist comments, and hateful memes were sent to Jones, she too quit Twitter. It was at this point that Twitter finally took action and permanently barred one of the leaders of the online hate campaign against Jones: Milo Yiannopoulos, technology editor at the Breitbart news site.
European countries are also seeing a rising spread of hateful rhetoric online. Many social media sites, including Facebook and Twitter, already have community standards or rules that ban hate speech, harassment, and hateful conduct. But the intensity of this year’s abuse prompted increased questioning of the role tech and social media companies should play in addressing online hate.
In Europe, where speech is more restricted than in the United States, several lawsuits have surfaced. In France, a Jewish youth group sued Twitter, Facebook, and Google over their monitoring of hate speech, seeking clarity on how posts are monitored by the tech companies. The case was postponed to allow the parties to come to an agreement.
Chan-Jo Jun, a lawyer from Germany, filed a complaint against Mark Zuckerberg and other senior Facebook executives "for failing to staunch a tide of racist and threatening posts...during an influx of migrants into Europe." Jun’s earlier suit against Zuckerberg was dismissed for lack of jurisdiction.
German government officials have also threatened action against Facebook, including the possibility of holding the company "criminally liable for illegal hate speech posts." Government pressure on Facebook increased after a far-right group posted a map of Jewish- and Israeli-owned businesses in Berlin on the 78th anniversary of Kristallnacht in November.
When the map was reported, Facebook initially declined to take it down, saying that it complied with its “community standards,” which are terms users agree to when they sign up to use Facebook. The map, and the group posting it, were eventually removed from Facebook because of pressure from the German government and the public.
Social media companies like Facebook and Twitter are making efforts to respond to the demand for accountability in this matter. Facebook collaborated with civil society to form the Initiative for Civil Courage Online, a project intended to amplify the voices of those spreading positive messages of tolerance.
Twitter, Facebook, YouTube, and Microsoft signed the European Union Code of Conduct in May of this year, aimed at preventing the spread of illegal hate speech. In its first evaluation of compliance with the Code, the European Commission gave the tech companies a harsh review. In early December, the Commission reported that despite a provision in the Code to review the majority of flagged content within 24 hours, only 40 percent of flagged content was reviewed in that time period.
The increased engagement of tech companies with government and civil society on these issues is a promising first step. However, there needs to be more consultation with a diverse group of civil society actors. These dialogues must seriously address the right to free expression and the roles and responsibilities of tech companies. For instance, while the E.U. Code of Conduct succeeded in engaging tech companies, the eventual code was criticized for insufficient civil society consultation, for threatening free expression, and for blurring the line between law and the rules set by companies themselves outside of a democratically accountable process.
With high-profile elections in Europe next year, this issue seems unlikely to go away in 2017. In the coming year, dialogue between civil society, governments, and tech companies must continue. The tech sector should stay engaged with civil society to build digital literacy and promote counter-narratives. And the removal of prohibited content should be governed by clear and transparent rules that comply with human rights principles, are drafted and continually refined through community consultation, and are enforced fairly, promptly, and across the board.