
New Zealand: Business leaders & govt. call on Facebook to do more to rid platform of extremist content after live-streamed terrorist attack on mosques

All components of this story

Article
28 March 2019

Facebook bans white nationalism from platform after pressure from civil rights groups

Author: David Ingram and Ben Collins, NBC News

"Facebook bans white nationalism from platform after pressure from civil rights groups," 27 March 2019

[Facebook] said in a blog post Wednesday that conversations with academics and civil rights groups convinced the company to expand its policies around hate groups... Scrutiny of Facebook reached new heights in the past two weeks after a gunman in Christchurch, New Zealand, used Facebook to livestream his attacks on two mosques that killed 50 people... Facebook's policies [had previously] banned white supremacy but allowed white nationalism and white separatism... Facebook has previously taken action in the wake of race-based violence, removing links to a white supremacist website and taking down a page used to organize the "Unite The Right" rally in 2017...

"Facebook's update should move Twitter, YouTube, and Amazon to act urgently to stem the growth of white nationalist ideologies, which find space on platforms to spread the violent ideas and rhetoric that inspired the tragic attacks witnessed in Charlottesville, Pittsburgh, and now Christchurch," [said] Rashad Robinson, president of Color of Change. A Twitter representative Wednesday declined to say whether the company was considering adopting a similar change. Amazon and YouTube did not immediately respond to requests for comment... On Tuesday, Ime Archibong, Facebook's vice president of product partnerships, revealed some details about a new oversight board that the company is forming to provide guidance on its "most challenging and contentious content decisions" and "hold us publicly accountable if we don't get them right."... “The board, as currently envisioned, will consist of about 40 global experts with experience in content, privacy, free expression, human rights, journalism and safety."

Read the full post here

Article
28 March 2019

Facebook blog post announces ban on white nationalism & separatism

Author: Facebook

"Standing Against Hate", 27 March 2019

We're announcing a ban on praise, support and representation of white nationalism and separatism on Facebook and Instagram, which we'll start enforcing next week... [O]ur conversations with members of civil society and academics...who are experts in race relations... have confirmed that white nationalism and separatism cannot be meaningfully separated from white supremacy and organized hate groups... Going forward, while people will still be able to demonstrate pride in their ethnic heritage, we will not tolerate praise or support for white nationalism and separatism... We also need to get better and faster at finding and removing hate from our platforms... We're making progress, but we know we have a lot more work to do... [W]e'll also start connecting people who search for terms associated with white supremacy to resources focused on helping people leave behind hate groups. People searching for these terms will be directed to Life After Hate, an organization founded by former violent extremists that provides crisis intervention, education, support groups and outreach... Our challenge is to stay ahead by continuing to improve our technologies, evolve our policies and work with experts who can bolster our own efforts. We are deeply committed and will share updates as this process moves forward.

Read the full post here

Article
28 March 2019

Facebook now bans white nationalism and separatism, not just white supremacy

Author: Hanna Kozlowska

Facebook is banning white nationalism and white separatism from its platforms, eliminating the controversial distinction it had historically drawn between those ideologies and white supremacy... Facebook has always prohibited white supremacy as an example of "hateful treatment of people based on characteristics such as race, ethnicity, or religion," the company explained. It did not extend this logic to white nationalism and separatism because it saw them as examples of the broader concepts of nationalism and separatism... Last year, this distinction was laid bare in training documents for Facebook content moderators that leaked to Motherboard. Experts and anti-hate groups critical of the policy pointed out that these ideologies overlap, and that the distinction was a technicality...The platform will connect people searching for terms associated with white supremacy with the group Life After Hate, which is run by former extremists and helps people leave hate groups.

Read the full post here

Article
27 March 2019

Facebook bans white nationalism & separatism content from its platforms

Author: Sasha Ingber, NPR

Facebook announced Wednesday that it intends to ban content that glorifies white nationalism and separatism, a major policy shift that will begin next week... Kristen Clarke, president and executive director of the Lawyers' Committee for Civil Rights Under Law, tells NPR that "[f]or too long, Facebook has maintained a policy that carved out an indefensible distinction between white supremacy and white nationalism and white separatism, and that carve-out allowed violent white supremacists to openly exploit the platform to incite violence across the country and frankly across the globe."

... Vera Eidelman, staff attorney with the ACLU [said], "White supremacist, nationalist and separatist views are repugnant, and Facebook as a private company is well within its rights to remove such hate and bigotry from its platform. Indeed, any content that crosses the line into incitement or true threats is not protected speech... [however] Facebook runs the risk of censoring those that attack white nationalism, too... every time Facebook makes the choice to remove content, a single company is exercising an unchecked power to silence individuals and remove them from what has become an indispensable platform... For the same reason that the Constitution prevents the government from exercising such power, we should be wary of encouraging its exercise by corporations that are answerable to their private shareholders rather than the broader public interest."

Read the full post here

Article
26 March 2019

Executives at Vodafone NZ, Spark & 2degrees call on CEOs of Facebook, Google & Twitter to take more responsibility over platform content

Author: CNN Business

"Read the letter New Zealand telecom executives sent to Facebook, Google & Twitter," 19 March 2019

Executives at three major internet service providers in New Zealand have written a letter to the CEOs of Facebook, Google and Twitter, asking them to take more responsibility over the content on their platforms... "You may be aware that on the afternoon of Friday 15 March, three of New Zealand's largest broadband providers, Vodafone NZ, Spark and 2degrees, took the unprecedented step to jointly identify and suspend access to web sites that were hosting video footage taken by the gunman related to the horrific terrorism incident in Christchurch. As key industry players, we believed this extraordinary step was the right thing to do in such extreme and tragic circumstances... Although we recognize the speed with which social network companies sought to remove Friday's video once they were made aware of it, this was still a response to material that was rapidly spreading globally and should never have been made available online... We call on Facebook, Twitter and Google, whose platforms carry so much content, to be a part of an urgent discussion at an industry and New Zealand Government level on an enduring solution to this issue... Social media companies and hosting platforms that enable the sharing of user generated content with the public have a legal duty of care to protect their users and wider society by preventing the uploading and sharing of content such as this video... Now is the time for this conversation to be had, and we call on all of you to join us at the table and be part of the solution."

Read the full post here

Company response
26 March 2019

Response from Facebook to letter from executives of Vodafone NZ, Spark & 2degrees

Author: Facebook

We continue to keep the people, families, and communities impacted by the tragedy in Christchurch in our hearts. Since the attack, we have been working closely with the New Zealand Police to respond to the attack and support their investigation. We removed the attacker’s video within minutes of the New Zealand Police’s outreach to us, and in the first 24 hours following the attack, removed more than 1.2 million copies of the attack video at upload using AI, preventing them from being seen on our services. Approximately 300,000 additional copies were removed after they were posted.

As we continue to work to support the New Zealand Police and to prevent the spread of this horrific content, we are also working to improve our proactive detection technology to more quickly and effectively detect content that violates our Community Standards while ensuring that people who use Facebook can engage in legitimate online expression. We’ve shared more details on our efforts at https://newsroom.fb.com/news/2019/03/technical-update-on-new-zealand/.

Download the full document here

Company response
25 March 2019

Response from Google to letter from executives of Vodafone NZ, Spark & 2degrees

Author: Google

Google and YouTube take issues of terrorist use of the internet very seriously. See here and here for posts that lay out our overall approach to terrorist content. In addition, we are working in coalition with other companies through the Global Internet Forum to Counter Terrorism to address these issues across platforms (see here for our various joint announcements). YouTube chaired GIFCT in 2018; Facebook is chairing for 2019.

Download the full document here

Company response
25 March 2019

Response from Twitter to letter from executives of Vodafone NZ, Spark & 2degrees

Author: Twitter

We are deeply saddened by the attack in Christchurch. Our hearts go out to the victims, their families, and everyone in the community affected by this tragedy. We are continuously monitoring and removing any content that depicts the tragedy, and will continue to do so in line with the Twitter Rules. We are also in close coordination with New Zealand law enforcement to help in their investigation.

If you see content that may break our rules, report it to us so we can take action. https://help.twitter.com/en/safety-and-security/report-a-tweet 

Download the full document here

Article
20 March 2019

Australian PM urges G20 to adopt stronger regulations on social media

Author: CNBC News

"Australia's prime minister calls for global social media restrictions after Christchurch shootings", 19 March 2019

Australian Prime Minister Scott Morrison has called for a global crackdown on social media after footage of last Friday’s mosque attacks in Christchurch, New Zealand was livestreamed on Facebook, calling into question the extent to which the world’s biggest tech giants can successfully monitor their own platforms.

In a letter to Japan’s Prime Minister Shinzo Abe, Morrison asked the G-20 chair to make the issue central to the world leaders’ upcoming summit in Osaka in June...

...“It is imperative that the global community works together to ensure that technology firms meet their moral obligation to protect the communities which they serve and from which they profit.”

Read the full post here

Article
19 March 2019

New Zealand: Social media's artificial intelligence unable to stop spread of videos of mosque attacks

Author: Washington Post, New Zealand Herald

 "Christchurch mosque shootings: How social media's business model helped the massacre go viral"  20 March 2019

People celebrating the mosque attacks that left 50 people dead were able to keep posting and reposting videos on Facebook, YouTube and Twitter despite the websites' use of largely automated systems powered by artificial intelligence to block them... Those pushing videos of Friday's attack made small alterations...to evade detection by artificial-intelligence systems designed by some of the world's most technologically advanced companies to block such content.

Mia Garlick, the head of communications and policy for Facebook in Australia and New Zealand, said the company would "work around the clock to remove violating content using a combination of technology and people." Garlick said the company is now also removing edited versions of the video that do not feature graphic violence. Twitter did not respond to a request for comment...and Reddit declined to comment, but both have described working hard over several days to remove objectionable content from the shooting.

A YouTube executive...acknowledged that the platform's systems were overwhelmed and promised to make improvements. "We've made progress, but that doesn't mean we don't have a lot of work ahead of us, and this incident has shown that," said Neal Mohan, YouTube's chief product officer...Those who study social media say that slowing the spread of appalling videos might require the companies to change or limit some features that help spread stimulating... Stephen Merity, a machine learning researcher in San Francisco, said tech companies do not want to use more drastic measures, such as tougher restrictions on who can upload or bigger investments in content-moderation teams, because of how they could alter their sites' usability or business model.

Read the full post here