Facebook & Twitter allegedly taking insufficient action to stop the spread of hate speech & incitement to violence through their platforms

All components of this story

Article
30 November 2018

Commentary: Facebook should align its policies & practices with human rights norms

Author: Professor John Ruggie, The New York Times

"Should I quit Facebook? It's complicated," 28 Nov 2018

S. Matthew Liao... absolves Facebook of any responsibility for its role in the ethnic cleansing of the Muslim Rohingya population in largely Buddhist Myanmar. Hate speech and incitement to violence on Facebook helped drive this genocidal campaign. Mr. Liao reasons that “Facebook did not intend for those things to occur on its platform.” The problem with this “intentionality” standard is that press reports and direct appeals repeatedly warned Facebook, first about the risks and then about the actual events. Under prevailing international human rights norms, knowingly continuing to allow the vitriol to be posted turns Facebook into a “contributor” to the heinous acts themselves.

... On Nov. 5, [Facebook] issued an independent human rights impact assessment of its role in Myanmar. In an accompanying blog, Alex Warofka, a Facebook policy product manager, stated that “we weren’t doing enough to help prevent our platform from being used to foment division and incite offline violence.” Facebook should now align its policies and practices with prevailing international human rights norms.

Read the full post here

Article
29 November 2018

Commentary: Do you have a moral duty to leave Facebook?

Author: S. Matthew Liao, The New York Times

From the perspective of one’s duties to others, the possibility of a duty to leave Facebook arises once one recognizes that Facebook has played a significant role in undermining democratic values around the world. For example, Facebook has been used to spread white supremacist propaganda and anti-Semitic messages in and outside the United States. The United Nations has blamed Facebook for the dissemination of hate speech against Rohingya Muslims in Myanmar that resulted in their ethnic cleansing... [D]o we have an obligation to leave Facebook for others’ sake? The answer is a resounding yes for those who are intentionally spreading hate speech and fake news on Facebook. For those of us who do not engage in such objectionable behavior, it is helpful to consider whether Facebook has crossed certain moral “red lines”... Facebook would have crossed a moral red line if it had, for example... intentionally assisted in the dissemination of hate speech in Myanmar. But the evidence indicates that Facebook did not intend for those things to occur on its platform... we should not place the responsibility to uphold democratic values entirely on Facebook. As moral agents, we should also hold ourselves responsible for our conduct... For now I’m going to stay on Facebook. But if new information suggests that Facebook has crossed a moral red line, we will all have an obligation to opt out.

Read the full post here

Article
20 November 2018

Professor John Ruggie calls upon Facebook to make significant changes to align its practices with the UNGPs & prevent its platform from being used to incite violence

Author: John G. Ruggie, John F. Kennedy School of Government, Harvard University

"Facebook in the rest of the world," 15 November 2018

On the eve of the recent closely watched US mid-term elections, Facebook released a human rights impact assessment of its possible role in the ethnic cleansing of [Myanmar’s] Muslim Rohingya population... A Facebook blog announcing the report states that “we weren’t doing enough to help prevent our platform from being used to foment division and incite offline violence.”... We find comparable Facebook involvement in murderous incitement and misinformation in other countries, including Egypt after the Arab Spring, India, the Philippines, Sri Lanka, and Ukraine... CEO Mark Zuckerberg [said] at a US Senate hearing on US electoral ‘meddling’: “it's clear now that we didn't do enough to prevent these tools from being used for harm.”... In the blog announcing the Myanmar report, Alex Warofka, Policy Product Manager, states: “We agree that we can and should have done more.”

... In committing to do more, Facebook has indicated that in future its practices will be “consistent with” the UN Guiding Principles on Business and Human Rights... [P]ersistent refusal to substantially change what the company does to reduce its role in others’ promotion of social strife and violence makes the attribution of ‘contribution’ inescapable. I welcome the steps Facebook has announced, including promising conduct consistent with the UN Guiding Principles. But much will have to change at the company, beginning with its business model.

Download the full document here

Article
19 November 2018

Commentary: Tech companies’ inability to control fake news exacerbates violent acts

Author: Jennifer Easterday & Hana Ivanhoe, OpenGlobalRights

The exponential growth of the ICT industry has had stark consequences for human lives and livelihoods, usually those of the world’s most vulnerable and marginalized populations—calling into question the industry’s “growth at all costs” approach to business... Social media is being weaponized by extremists and inadvertently used by everyday people as a megaphone that amplifies hate speech... [E]arlier this year, Sri Lanka again descended into violence as online rumors spurred deadly attacks by members of the Buddhist majority against Muslims... Over the course of three days in March, mobs burned mosques, Muslim homes, and Muslim-owned shops... In response, the government temporarily blocked social media, including Facebook and two other social media platforms Facebook owns, WhatsApp and Instagram.

... Despite repeated early warnings and flagging of violent content, Facebook failed to delete offensive posts or take any ameliorative action. It was only after Facebook’s services were blocked, officials said, that the company took notice. Even then, the company’s initial response was limited to the adoption of a voluntary internal policy whereby it would “downrank” false posts and work with third parties to identify posts for eventual removal... While there are a number of initiatives already in place to address human rights practices at ICT companies generally, some fairly robust company-specific CSR and human rights policies at leading ICT companies, and a couple of IGO/NGO initiatives looking at best practices for corporate behavior in high-risk settings, we still lack a collaborative initiative tailored specifically to ICT companies doing business in high-risk settings.

Read the full post here

Company response
19 November 2018

Response from Facebook

Author: Facebook

We are deeply disturbed by the violence that occurred in Sri Lanka this past March. We want to make sure that Facebook is a place where people can express themselves and connect with their friends, families, and communities, and we know this requires that our platform is a place where people feel safe. That’s why our Community Standards have clear rules against hate speech and content that incites violence, and we remove such content as soon as we’re made aware of it... Our approach to hate speech and incitement to violence—especially in conflict and post-conflict environments—has evolved over time and continues to change... In Sri Lanka specifically, we’re actively building up teams that deal with reported content, working with civil society and government to better understand local context and challenges, and building out our technical capabilities so that we can more proactively address abusive content on Facebook. [We’re also carrying out an independent human rights impact assessment of Facebook’s role in Sri Lanka to help inform our approach.]... [W]e’re committed to having the right policies, products, people, and partnerships in place to help keep our community in Sri Lanka and around the world safe.

Download the full document here

Company response
19 November 2018

Response from Twitter

Author: Twitter

Twitter does not permit hateful conduct, abuse, threats of violence, or targeted harassment on our service. These types of behavior do not encourage free expression or foster open dialogue; they stifle them. As part of our overall health initiative, we are investing resources in personnel, policies, product, and operations to ensure we are promoting conversation and debate that is civic-minded, open, and healthy. We have brought on independent academics from Oxford and Leiden universities to hold our entire approach to account. However, this is not just a Twitter issue; it is a societal one... As our CEO Jack Dorsey stated in front of Congress in the U.S., serving the public conversation means disincentivizing abusive behaviors, removing automated attempts to deceive and promote disinformation at scale, and ensuring that when the public comes to our service, they gain a constructive, informed view of the world's conversation. We all have a part to play in this, and we are committed to playing ours.

Download the full document here

Article
14 November 2018

Delay, deny and deflect: How Facebook’s leaders fought through crisis

Author: Sheera Frenkel, Nicholas Confessore, Cecilia Kang, Matthew Rosenberg & Jack Nicas, The New York Times

[A]s evidence accumulated that Facebook’s power could also be exploited to disrupt elections, broadcast viral propaganda and inspire deadly campaigns of hate around the globe, Mr. Zuckerberg and Ms. Sandberg stumbled... When Facebook users learned last spring that the company had compromised their privacy in its rush to expand, allowing a political data firm linked to President Trump to access the personal information of tens of millions of people, Facebook sought to deflect blame and mask the extent of the problem... Facebook employed a Republican opposition-research firm to discredit activist protesters, in part by linking them to the liberal financier George Soros.

... Facebook declined to make Mr. Zuckerberg and Ms. Sandberg available for comment. In a statement, a spokesman acknowledged that Facebook had been slow to address its challenges but had since made progress fixing the platform. “This has been a tough time at Facebook and our entire management team has been focused on tackling the issues we face,” the statement said. “While these are hard problems, we are working hard to ensure that people find our products useful and that we protect our community from bad actors.”

Read the full post here

Article
18 July 2018

Facebook establishes new policy to remove misinformation that could lead to violence

Author: Sheera Frenkel, The New York Times

Facebook... said... that it would begin removing misinformation that could lead to people being physically harmed. The policy expands Facebook’s rules about what type of false information it will remove, and is largely a response to episodes in Sri Lanka, Myanmar and India in which rumors that spread on Facebook led to real-world attacks on ethnic minorities... “We have identified that there is a type of misinformation that is shared in certain countries that can incite underlying tensions and lead to physical harm offline,” said Tessa Lyons, a Facebook product manager. “We have a broader responsibility to not just reduce that type of content but remove it.”

Facebook has been roundly criticized over the way its platform has been used to spread hate speech and false information that prompted violence... In Myanmar, Facebook has been accused by United Nations investigators and human rights groups of facilitating violence against Rohingya Muslims... In Sri Lanka, riots broke out after false news pitted the country’s majority Buddhist community against Muslims... In an interview published Wednesday by the technology news site Recode, Mark Zuckerberg, Facebook’s chief executive, [said]... “I think that there’s a terrible situation where there’s underlying sectarian violence and intention... It is clearly the responsibility of all of the players who were involved there.”... Under the new rules, Facebook said it would create partnerships with local civil society groups to identify misinformation for removal. The new rules are already being put into effect in Sri Lanka... The company has started identifying posts that are categorized as false by independent fact checkers. Facebook will “downrank” those posts... so that they are not highly promoted across the platform.

Read the full post here

Article
15 March 2018

Sri Lanka lifts ban on Facebook imposed after spasm of communal violence

Author: Shihar Aneez & Ranga Sirilal, Reuters

Sri Lankan officials said on Thursday they had lifted a ban on Facebook after discussions with the social network, a week after blocking access on the grounds it was being used to fuel communal violence. At least two people were killed in clashes last week when Sinhalese Buddhists, angered by the killing of a Buddhist driver by Muslims, attacked mosques and Muslim-owned properties. Some of the violence was instigated by threatening posts on Facebook, according to the government, which cut access to Facebook, Viber and WhatsApp on March 7. It initially said the ban would last for three days but extended the block without informing the public, users said... Government officials said Facebook’s action against those who spread hate speech had been too slow. “Facebook officials agreed to speed up the response time,” [said] telecommunications minister Harin Fernando... Facebook Inc said in a statement to Reuters its officials met Sri Lankan government officials to outline the company’s community standards and commitment to removing hate speech and incitement to violence from its platform... “We have clear rules against such content, and will remove it when we’re made aware of it. We are glad access to our services, and important connections for people and businesses, have been restored,” the statement said.

Read the full post here

Article
7 March 2018

Sri Lanka shut down Facebook, WhatsApp, and Instagram to stop anti-Muslim violence

Author: Hanna Kozlowska, Quartz

The government of Sri Lanka has shut down Facebook, WhatsApp, Instagram, and Viber in an attempt to quell ethnic strife in the country. Authorities say people were using the social media platforms to stoke violence against Sri Lanka’s Muslim minority... The government declared a state of emergency after riots in the district of Kandy left two people dead and property destroyed. “We have clear rules against hate speech and incitement to violence and work hard to keep it off our platform,” a Facebook spokesperson said in a statement. “We are responding to the situation in Sri Lanka and are in contact with the government and non-governmental organizations to support efforts to identify and remove such content.”