Twitter releases policy banning dehumanizing speech on platform

In September 2018, Twitter announced a new policy banning dehumanizing speech on the social media platform. The policy states: "You may not dehumanize anyone based on membership in an identifiable group, as this speech can lead to offline harm." Definitions of identifiable groups and dehumanization are included in the policy. Civil society groups have criticized Twitter for defining these terms too narrowly, noting that the rules are largely aimed at religious groups while many other groups have a long history of being dehumanized. Twitter joins Facebook, YouTube and other online platforms in trying to limit hate speech online.



Article
9 July 2019

Civil society criticizes Twitter's speech policy for not protecting all groups

Author: Kate Conger, New York Times

"Twitter Backs Off Broad Limits on 'Dehumanizing' Speech", 9 July 2019

[On 9 July, 2019]... Twitter rolled out its first official guidelines around what constitutes dehumanizing speech on its service... “While we have started with religion, our intention has always been and continues to be an expansion to all protected categories,” Jerrel Peterson, Twitter’s head of safety policy, said... “We get one shot to write a policy that has to work for 350 million people who speak 43-plus languages while respecting cultural norms and local laws... We realized we need to be really small and specific.” ... Twitter has focused its removal policies on posts that may directly harm an individual, such as threats of violence or messages that contain personal information or nonconsensual nudity. Under the new rules, the company is adding a sentence that says users “may not dehumanize groups based on their religion, as these remarks can lead to offline harm.”... Rashad Robinson, the president of Color of Change, a civil rights group, said... "Dehumanization is a great start, but if dehumanization starts and stops at religious categories... that does not encapsulate all the ways people have been dehumanized"... In October and November [2018]... Twitter... [narrowed] down to groups that are protected under civil rights law, such as women, minorities and L.G.B.T.Q. people... [Incidents of mass violence have been linked to online hate campaigns.] Those include the ethnic cleansing of Rohingya Muslims in Myanmar, which was preceded by hate campaigns on social networks like Facebook... The company prepared a feature to preserve tweets from world leaders... even if they engaged in dehumanizing speech. Twitter reasoned that such posts were in the public interest.



Article
25 September 2018

Twitter expands hate speech policy to include dehumanizing language

Author: Vijaya Gadde and Del Harvey, Twitter

"Creating new policies together", 25 September 2018

[W]e have been developing a new policy to address dehumanizing language on Twitter. Language... can have repercussions off the service, including normalizing serious violence. Some of this content falls within our hateful conduct policy (which prohibits the promotion of violence against or direct attacks or threats against other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease)... [W]e want to expand our hateful conduct policy to include content that dehumanizes others based on their membership in an identifiable group, even when the material does not include a direct target.

Twitter’s Dehumanization Policy

You may not dehumanize anyone based on membership in an identifiable group, as this speech can lead to offline harm.

Definitions

Dehumanization: Language that treats others as less than human. Dehumanization can occur when others are denied of human qualities (animalistic dehumanization) or when others are denied of their human nature (mechanistic dehumanization). Examples can include comparing groups to animals and viruses (animalistic), or reducing groups to a tool for some other purpose (mechanistic).

Identifiable group: Any group of people that can be distinguished by their shared characteristics such as their race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, serious disease, occupation, political beliefs, location, or social practices.


Article
25 September 2018

Twitter Releases New Policy on 'Dehumanizing Speech'

Author: Louise Matsakis, Wired

Twitter... announced a new policy... prohibit[ing] “content that dehumanizes others based on their membership in an identifiable group, even when the material does not include a direct target.” ...“The dehumanizing content and the dehumanizing behavior is one of the areas that really makes up a significant chunk of those reports,” says Del Harvey, Twitter’s vice president of trust and safety... “Dehumanization is important since it leads to real harm; it's just challenging to define precisely, and it's critical to protect freedom of speech as well,” says [Susan Benesch, founder of the Dangerous Speech Project]... “Not all dangerous speech has dehumanizing language, and not all comparisons of human beings with animals are dehumanizing,” says Benesch. “Twitter and other platforms should be careful not to define dehumanization too broadly. For example, it’s tempting to say that any demeaning remark about a group of people, such as ‘the X people are all thieves’ or ‘all corrupt’ is dehumanizing. That one is not dehumanizing, since corruption is a specialty of humans.” ...[R]eal-world incidents have proved challenging for other social media companies to police effectively... Facebook, for instance, has been accused of helping to facilitate the Rohingya Muslim crisis in Myanmar, which the UN now says should be prosecuted as genocide.
