Facebook & Twitter allegedly taking insufficient action to stop spread of hate speech & misinformation through their platforms
All components of this story
Author: Emily Birnbaum, The Hill
"Warren turns up heat over Facebook's ad rules," 15 Oct 2019
Sen. Elizabeth Warren (D-Mass.), a top-tier Democratic presidential candidate, is turning up the heat in her battle with one of the most powerful tech companies in the world, Facebook, as she shines a spotlight on the company’s rules on political ads... Critics have argued that Facebook is abdicating responsibility over its powerful platform, which reaches more than 2 billion people globally, while the company and free speech advocates have insisted it’s risky for Facebook to take more control over what political candidates are allowed to say. “The policies they’ve announced are an explicit invitation to politicians to spread falsehoods,” Paul Barrett, the deputy director of New York University’s Stern Center for Business and Human Rights, told The Hill. “And that is not something that we ought to applaud.”
... Facebook has emphasized that it believes politicians should be exempt from many of its rules on speech. Facebook runs a third-party fact-checking program, which adds disclaimers to posts that can be proven false, but it now says that politicians’ posts and advertisements will not go through that system.
Author: Alex Hern, The Guardian
Facebook has quietly rescinded a policy banning false claims in advertising... The social network had previously banned adverts containing “deceptive, false or misleading content”... [But] the rules have narrowed considerably, only banning adverts that “include claims debunked by third-party fact-checkers, or, in certain circumstances, claims debunked by organisations with particular expertise”... Facebook [has] clarified that only politicians currently in office or running for office, and political parties, are exempt: other political adverts still need to be true... A Facebook spokesman said: “We don’t believe that it’s an appropriate role for us to referee political debates. Nor do we think it would be appropriate to prevent a politician’s speech from reaching its audience and being subject to public debate and scrutiny.”
... Facebook’s decision comes as the rival service TikTok takes the opposite stance... “Any paid ads that come into the community need to fit the standards for our platform, and the nature of paid political ads is not something we believe fits the TikTok platform experience,” wrote Blake Chandlee, the company’s vice-president of global business solutions. “To that end, we will not allow paid ads that promote or oppose a candidate, current leader, political party or group, or issue at the federal, state, or local level – including election-related ads, advocacy ads, or issue ads.”
Author: Kevin Stankiewicz, CNBC
Former Twitter CEO Dick Costolo told CNBC on Monday that the social media company should give different sharing permissions to different types of accounts as a way to improve discourse on its platform. “You have to start treating all these accounts differently,” Costolo said... “You’ve got high authority accounts, like newspaper accounts … that may be allowed to tweet things that a user that just signed up yesterday and has zero followers shouldn’t.”... Twitter has recently shown a willingness to differentiate among accounts, announcing in June a new feature that would label tweets from influential government officials who violate its content policies instead of taking the posts down.
... Twitter and other social media companies such as Facebook have faced scrutiny over the way they regulate — or fail to regulate — content. Critics argue the companies should do more to crack down on discriminatory and offensive content. Others believe the platforms should not be restrictive, and some argue they should apply the free speech standards of the First Amendment, which applies to how government entities regulate speech, not publicly traded companies.
Author: NPR (US)
“Twitter Removes Thousands Of Accounts For Manipulating Its Platform”, 20 Sep 2019
… Twitter permanently suspended thousands of accounts in its ongoing effort to fight the spread of disinformation and political discord on its platform, the company announced… The Twitter accounts came from the United Arab Emirates, Egypt, Saudi Arabia, Spain, Ecuador and China... Groups of suspended accounts were involved in various information campaigns, using tactics like spreading content through fake accounts and spamming through retweets.
The accounts were suspended for violating Twitter's policy on platform manipulation, which Twitter defines as large-scale aggressive or deceptive activity that misleads or disrupts people's social media activity. Twitter has been suspending or removing accounts linked to this sort of activity throughout the year. In August, the company suspended around 200,000 accounts it reported were used to discredit pro-democracy protests in Hong Kong. So far, the companies' progress has been slow, said Nina Jankowicz, a global fellow at the Wilson Center's Kennan Institute in Washington, D.C. She said shutting down disinformation campaigns will take both tech-based solutions and educating people through digital literacy…
Twitter didn't just suspend or remove the accounts. The company also put many of them into an archive of millions of tweets the platform identified as part of "state-backed information operations."
Author: Professor John Ruggie, The New York Times
"Should I quit Facebook? It's complicated," 28 Nov 2018
S. Matthew Liao... absolves Facebook of any responsibility for its role in the ethnic cleansing of the Muslim Rohingya population in largely Buddhist Myanmar. Hate speech and incitement to violence on Facebook helped drive this genocidal campaign. Mr. Liao reasons that “Facebook did not intend for those things to occur on its platform.” The problem with this “intentionality” standard is that press reports and direct appeals repeatedly warned Facebook, first about the risks and then the actual events. Under prevailing international human rights norms, knowingly continuing to allow the vitriol to be posted turns Facebook into a “contributor” to the heinous acts themselves.
... On Nov. 5, [Facebook] issued an independent human rights impact assessment of its role in Myanmar. In an accompanying blog, Alex Warofka, a Facebook policy product manager, stated that “we weren’t doing enough to help prevent our platform from being used to foment division and incite offline violence.” Facebook should now align its policies and practices with prevailing international human rights norms.
Author: S. Matthew Liao, The New York Times
From the perspective of one’s duties to others, the possibility of a duty to leave Facebook arises once one recognizes that Facebook has played a significant role in undermining democratic values around the world. For example, Facebook has been used to spread white supremacist propaganda and anti-Semitic messages in and outside the United States. The United Nations has blamed Facebook for the dissemination of hate speech against Rohingya Muslims in Myanmar that resulted in their ethnic cleansing... [D]o we have an obligation to leave Facebook for others’ sake? The answer is a resounding yes for those who are intentionally spreading hate speech and fake news on Facebook. For those of us who do not engage in such objectionable behavior, it is helpful to consider whether Facebook has crossed certain moral “red lines”... Facebook would have crossed a moral red line if it had, for example... intentionally assisted in the dissemination of hate speech in Myanmar. But the evidence indicates that Facebook did not intend for those things to occur on its platform... we should not place the responsibility to uphold democratic values entirely on Facebook. As moral agents, we should also hold ourselves responsible for our conduct... For now I’m going to stay on Facebook. But if new information suggests that Facebook has crossed a moral red line, we will all have an obligation to opt out.
Professor John Ruggie calls upon Facebook to make significant changes to align its practices with the UNGPs & prevent it being used to incite violence
Author: John G. Ruggie, Harvard University John F. Kennedy School of Government
"Facebook in the rest of the world," 15 November 2018
On the eve of the recent closely watched US mid-term elections Facebook released a human rights impact assessment of its possible role in the ethnic cleansing of Myanmar’s Muslim Rohingya population... A Facebook blog announcing the report states that “we weren’t doing enough to help prevent our platform from being used to foment division and incite offline violence.”... We find comparable Facebook involvement in murderous incitement and misinformation in other countries, including Egypt after the Arab spring, India, Philippines, Sri Lanka, and Ukraine... CEO Mark Zuckerberg [said] at a US Senate hearing on US electoral ‘meddling’: “it's clear now that we didn't do enough to prevent these tools from being used for harm.”... In the blog announcing the Myanmar report, Alex Warofka, a Facebook policy product manager, states: “We agree that we can and should have done more.”
... In committing to do more, Facebook has indicated that in future its practices will be “consistent with” the UN Guiding Principles on Business and Human Rights... [P]ersistent refusal to substantially change what the company does to reduce its role in others’ promotion of social strife and violence makes the attribution of ‘contribution’ inescapable. I welcome the steps Facebook has announced, including promising conduct consistent with the UN Guiding Principles. But much will have to change at the company, beginning with its business model.
Author: Jennifer Easterday & Hana Ivanhoe, OpenGlobalRights
The exponential growth of the ICT industry has had stark consequences in the form of human lives and livelihoods, usually of the world’s most vulnerable and marginalized populations—calling into question the industry’s “growth at all costs” approach to business... Social media is being weaponized by extremists and inadvertently utilized as a megaphone for amplifying hate speech by everyday people... [E]arlier this year, Sri Lanka again descended into violence as online rumors spurred deadly attacks by members of the Buddhist majority against Muslims... Over the course of three days in March, mobs burned mosques, Muslim homes, and Muslim-owned shops... In response, the government temporarily blocked social media, including Facebook and two other social media platforms Facebook owns, WhatsApp and Instagram.
... Despite repeated early warnings and flags of violent content, Facebook failed to delete offensive posts or take any sort of ameliorative action. It was only after Facebook’s services were blocked, officials said, that the company took notice. Even then, the company’s initial response was limited to the adoption of a voluntary internal policy whereby it would “downrank” false posts and work with third parties to identify posts for eventual removal... While there are a number of initiatives already in place to address human rights practices at ICT companies generally, some fairly robust company-specific CSR and human rights policies at leading ICT companies, and a couple of IGO/NGO initiatives looking at best practices for corporate behavior in high-risk settings, we still lack a collaborative initiative tailored specifically to ICT companies doing business in high-risk settings.
We are deeply disturbed by the violence that occurred in Sri Lanka this past March. We want to make sure that Facebook is a place where people can express themselves and connect with their friends, families, and communities, and we know this requires that our platform is a place where people feel safe. That’s why our Community Standards have clear rules against hate speech and content that incites violence, and we remove such content as soon as we’re made aware of it... Our approach to hate speech and incitement to violence—especially in conflict and post-conflict environments—has evolved over time and continues to change... In Sri Lanka specifically, we’re actively building up teams that deal with reported content, working with civil society and government to better understand local context and challenges, and building out our technical capabilities so that we can more proactively address abusive content on Facebook. We’re also carrying out an independent human rights impact assessment of Facebook’s role in Sri Lanka to help inform our approach... [W]e’re committed to having the right policies, products, people, and partnerships in place to help keep our community in Sri Lanka and around the world safe.
- Related stories:
  - Facebook & Twitter allegedly taking insufficient action to stop spread of hate speech & misinformation through their platforms
  - Sri Lanka: Facebook used to fuel violence against Muslims; inc. company statement
- This is a response from the following companies: Facebook
Twitter does not permit hateful conduct, abuse, threats of violence, or targeted harassment on our service. These types of behavior do not encourage free expression or foster open dialogue; they stifle them. As part of our overall health initiative, we are investing resources in personnel, policies, product, and operations to ensure we are promoting conversation and debate that is civic-minded, open, and healthy. We have brought on independent academics from Oxford and Leiden universities to hold our entire approach to account. However, this is not just a Twitter issue, it is a societal one... As our CEO Jack Dorsey stated in front of Congress in the U.S., serving the public conversation means disincentivizing abusive behaviors, removing automated attempts to deceive and promote disinformation at scale, and ensuring that when the public comes to our service, they gain a constructive, informed view of the world's conversation. We all have a part to play in this - we are committed to playing ours.
- Related stories: Facebook & Twitter allegedly taking insufficient action to stop spread of hate speech & misinformation through their platforms
- This is a response from the following companies: Twitter