EU: Commission releases new recommendations for tech companies to combat extremist content online

Article
29 March 2018

Commentary: EU should not make platforms the judges of free speech

Author: Nick Wallace, EU Observer

[A] recent threat by the European Commission to hold platforms responsible for their users' posts [c]ould [...] push... platforms to remove anything they are unsure about before any court has ruled it illegal...

On 1 March, the commission issued a recommendation [...] demanding that online platforms take "proactive measures" to remove all varieties of illegal content posted by users...

The commission threatened regulation if platforms do not comply.

But increasing platforms' liability would drastically alter the effects of the different national laws on free speech.

The threat of fines, combined with the grey areas inherent to laws limiting what people can say, would push platforms to remove content whenever in doubt, even where a court might let it remain online...

Naturally, platforms should comply with court orders and be punished when they do not, but to safeguard free speech, claims of hate speech, illegally "offensive" messages, or similar allegations should be judged in court on a case-by-case basis.

If the EU makes platforms liable for what their users post, then throughout Europe, the threat of fines will pressure platforms to delete all dubious content, [...] limiting opportunities to challenge allegations of hate speech, and stifling public debate.

Article
1 March 2018

EU gives online platforms legal tool to justify takedowns

Author: Peter Teffer, EU Observer

The European Commission adopted a legal text on Thursday (1 March) which gives online platforms like Facebook and Twitter guidelines on when and how to take down illegal content like hate speech or terrorist propaganda...

The text, presented by no fewer than four EU commissioners, can be seen as something of a last chance for internet companies to make self-regulation work.

Security commissioner Julian King said the commission would monitor how the recommendation plays out in practice, and that it was about "sending a clear signal" to the internet companies...

The commissioners also stressed that the recommendation contained safeguards for protecting freedom of speech.

Users who have posted something they believe is legal should be able to appeal a company's decision to take down their content.

Article
1 March 2018

EU: Commission releases new recommendations for tech companies to combat illegal content online

Author: Thuy Ong, The Verge

The European Commission has sent out expansive guidelines aimed at Facebook, Google, and other tech companies on removing terrorist and other illegal content online. The commission outlined recommendations, which apply to all forms of illegal content, including terrorist media, child sexual abuse material, counterfeit products, copyright infringement, and material that incites hatred and violence. The recommendations also specify clearer procedures, more efficient tools, and stronger safeguards including human oversight and verification, so something that’s incorrectly flagged can be restored...

The commission is suggesting these operational measures as a soft law before it decides whether or not to propose legislation. The recommendations are non-binding, but they can still be used as legal references in court...

Facebook previously said it wants to be a “hostile place” for terrorists and is using a mix of AI and human intervention to root out terrorist content. YouTube also announced new steps last year including automated systems and additional flaggers to fight extremism on its platform. In 2016, Facebook, Twitter, Microsoft, and YouTube signed an EU code of conduct on countering hate speech online.

Article
7 December 2017

EU warns tech firms: remove extremist content faster or be regulated

Author: Samuel Gibbs, The Guardian (UK)

The European Commission has warned Facebook, Google, YouTube, Twitter and other internet technology companies that they must do more to stem the spread of extremist content or face legislation.

Growing pressure from European governments has led companies to significantly boost the resources they dedicate to taking down extremist content as quickly as possible.

But... [i]f the EU is not satisfied with the further progress on the removal of extremist content by technology companies, which are primarily based in the US, it said it will come forward with legislation next year to force the issue...

The Global Internet Forum, a group of technology companies including Microsoft, Facebook, Twitter and YouTube that pools resources to combat extremist content, said that progress had been made.

Article
5 December 2017

Facebook bans women for posting 'men are scum' after harassment scandals

Author: Samuel Gibbs, The Guardian (UK)

In the wake of the multiple sexual harassment and abuse scandals across the globe, Facebook has been suspending women for “hate speech” against men after they posted variations of the phrase “men are scum”.

Despite Facebook’s chief operating officer Sheryl Sandberg warning of a potential backlash against women as scandals rock companies and political institutions, the social network continues to ban women speaking out against men as a group...

Facebook says that threats and hate speech directed towards a protected group violate its community standards and therefore are removed. The social network told the Daily Beast that “men are scum” was a threat and therefore should be removed...

A Facebook spokesperson told the Guardian: “We understand how important it is for victims of harassment to be able to share their stories and for people to express anger and opinions about harassment — we allow those discussions on Facebook. We draw the line when people attack others simply on the basis of their gender.”

Article
5 December 2017

Google to hire thousands of moderators after outcry over YouTube abuse videos

Author: Sam Levin, The Guardian (UK)

Google is hiring thousands of new moderators after facing widespread criticism for allowing child abuse videos and other violent and offensive content to flourish on YouTube.

...The news from YouTube’s CEO, Susan Wojcicki, followed a steady stream of negative press surrounding the site’s role in spreading harassing videos, misinformation, hate speech and content that is harmful to children.

Wojcicki said that in addition to an increase in human moderators, YouTube is continuing to develop advanced machine-learning technology to automatically flag problematic content for removal. The company said its new efforts to protect children from dangerous and abusive content and block hate speech on the site were modeled after the company’s ongoing work to fight violent extremist content...

The statement also said YouTube was reforming its advertising policies, saying it would apply stricter criteria, conduct more manual curation and expand its team of ad reviewers.
