Social media and hate speech go hand in hand, but with UK Prime Minister Theresa May’s accusation that the platforms provide a “safe space” for terrorists, pressure has increased for these companies to crack down with social media monitoring software and to report possible terrorist activity.
“We cannot allow this ideology the safe space it needs to breed,” May said. “Yet that is precisely what the internet, and the big companies that provide internet-based services, provide.”
Many tech giants took this as an insult. They spoke out, outlining the policies they already have in place to identify and combat terrorist activity.
Facebook’s Director of Policy Simon Milner said he wants the site to be a “hostile platform for terrorists.” Twitter’s help center specifically says, “you may not make threats of violence or promote violence, including threatening or promoting terrorism.” Most sites also prohibit any speech that attacks or insults a community based on race, ethnic origin, religion, disability, gender, age, veteran status, sexual orientation and gender identity.
“We employ thousands of people and invest hundreds of millions of pounds to fight abuse on our platforms and ensure we are part of the solution to addressing these challenges,” a Google spokesperson said.
“The threat of abuse and harassment online means that many people stop expressing themselves and give up on seeking different opinions.”
Other sites and digital experts have questioned the effectiveness of such tools. So far, there is no conclusive evidence that artificial intelligence (AI)-powered bots or machine-learning algorithms can effectively locate and remove hate speech on the web.
Most of the counter-arguments have to do with the context and impact of each comment. Bots have a hard time differentiating between people discussing or condemning hate speech and people actually posting offensive content.
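That limitation can be sketched with a toy keyword filter. This is an illustrative example only, with hypothetical terms and messages, not any platform’s real system: it flags a post condemning violence just as readily as one promoting it, because it matches words rather than intent.

```python
# Illustrative sketch of context-blind moderation: a naive keyword filter.
# The blocklist and example posts are hypothetical, invented for this example.

BLOCKLIST = {"terrorist", "attack"}  # hypothetical flagged terms

def naive_flag(post: str) -> bool:
    """Flag a post if it contains any blocklisted keyword, ignoring context."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return not BLOCKLIST.isdisjoint(words)

# A post condemning violence trips the filter exactly like one inciting it,
# because the filter sees keywords, not intent.
condemnation = "We must stand together against every terrorist attack."
incitement = "Join the attack."
print(naive_flag(condemnation))  # True -- a false positive
print(naive_flag(incitement))    # True
```

Distinguishing the two posts requires modeling context and intent, which is exactly where current automated systems fall short.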
Until that technology improves, social media will continue to struggle to combat hate speech, especially with the prevalence of echo chambers in online communities. People who hold hateful opinions often socialize with people of similar opinions, and they likely won’t report content because they agree with it.
Last month, Twitter failed the European Union’s assessment of policing hate speech online. The EU asked social media sites to remove at least half of postings universally considered to be hate speech. But Twitter only removed about 40 percent of that content.
Despite their pushback against Theresa May, many sites have struggled to combat hate speech, according to the Simon Wiesenthal Center’s 2017 Digital Terrorism & Hate Report Card.
Twitter, Facebook and Google/YouTube were graded on both terrorism and hate, while other sites were simply evaluated for terrorism. Facebook scored highest for combating terrorism with an A- grade, while 8chan, a site blacklisted by Google for hosting child pornography, and Gab, deemed “The Alt-Right’s Twitter,” scored F grades.
Encrypted messaging apps were graded separately. Telegram scored highest with a B- and SureSpot scored lowest with an F. Encrypted apps in particular were condemned by May, who has called for an end to encryption.
Critics, though, have denounced May’s agenda as unworkable. Michael Reilly of the MIT Technology Review referred to the notion that technology providers should create cryptographic holes as “ludicrous bordering on impossible.”
For efficiency’s sake, it would be encouraging to see organizations like Jigsaw take a step toward building an AI or machine-learning system that can effectively monitor for hate speech.
Social media companies don’t necessarily have the manpower to have a practical, human-powered solution. In the meantime, we’ll just have to rely heavily on one another to police the internet and report content that could potentially incite harm or perpetuate racism, radicalism and bigotry.