By the Blouin News Technology staff

Facebook under fire for hate speech


Courtesy of Women, Action & Media’s website.

Facebook has drawn heavy criticism over the years for its negligence in filtering out race- and gender-based hate groups, but the latest action by several activist organizations has cast renewed light on the social network’s hate speech filtering practices – or lack thereof.

Women, Action, & the Media – a non-profit known as WAM that works to advance “gender justice” in media – joined with the group Everyday Sexism to issue an open letter to Facebook on May 21, asking for the removal of content promoting violence against women and girls and a ban on gender-based hate speech across the site. The letter made three pointed requests:

1. Recognize speech that trivializes or glorifies violence against girls and women as hate speech and make a commitment that you will not tolerate this content.

2. Effectively train moderators to recognize and remove gender-based hate speech.

3. Effectively train moderators to understand how online harassment differently affects women and men, in part due to the real-world pandemic of violence against women.

Thus ensued a massive social media campaign backed by media-awareness non-profits such as Miss Representation and by global movements dedicated to ending violence against women, including VDAY. The hashtag “#FBrape” spread on Twitter as users signed petitions and posted articles demanding that the violent content be removed. WAM noted that some 60,000 tweets and 5,000 emails were sent during the week-long campaign, all of which culminated in Facebook’s official acknowledgement, on May 28, of its failure to “identify and remove” hate speech. The social network promised to repair several aspects of its content operations, as the company noted in a blog post:

We have been working over the past several months to improve our systems to respond to reports of violations, but the guidelines used by these systems have failed to capture all the content that violates our standards. We need to do better – and we will.

No doubt, Facebook’s attention to the issue escalated as brands including automaker Nissan pulled their advertisements from the site. Reports note that at least 13 brands removed ads out of concern that they would appear on pages associated with the offensive content.

But whether Facebook addressed the complaints of WAM and others because of the deluge of demands or because of its ad losses, the debacle highlights the challenge facing proponents of the open internet in defining hate speech for removal while preserving free expression. Social sites have historically dealt with either private group petitions for the removal of content or – in the case of Twitter – government summons, both of which raise the question of how responsible free online networks should be for the offensive nature of their users’ content. In the case of Facebook, petitioners argue that, as a for-profit company, the site must take responsibility for its content if it expects its ad business to keep growing. But drawing the line between hate speech and “clear attempts at humor or satire that might otherwise be considered a possible threat or attack” – which Facebook explicitly allows, per its official website – promises to be an ongoing challenge for a network that hosts 1 billion users.