

Article

21 December 2018

Author:
Amnesty International

Troll Patrol findings: Using crowdsourcing, data science & machine learning to measure violence & abuse against women on Twitter

These findings are the result of a collaboration between Amnesty International and Element AI, a global artificial intelligence software product company. Together, we surveyed millions of tweets received by 778 journalists and politicians from the UK and US throughout 2017, representing a variety of political views and media spanning the ideological spectrum...

Amnesty International has repeatedly urged Twitter to publicly share comprehensive and meaningful information about reports of violence and abuse against women, as well as other groups, on the platform, and how it responds to them. On 12 December 2018 Twitter released an updated Transparency Report, in which it included for the first time a section on 'Twitter Rules Enforcement'. This was one of Amnesty International’s key recommendations to Twitter, and we see the inclusion of this data as an encouraging step. We are disappointed, however, that the information provided in the transparency report does not go far enough...

Our study found that 7.1% of tweets sent to the women in the study were problematic or abusive. This amounts to 1.1 million problematic or abusive mentions of these 778 women across the year, or one every 30 seconds on average. Women of colour were more likely to be impacted, with black women disproportionately targeted with problematic or abusive tweets.
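The headline figures above are internally consistent, which a quick back-of-the-envelope check makes clear. The sketch below uses only the rounded numbers reported in the article (1.1 million abusive mentions, a 7.1% abuse rate); the implied total-tweet count is our own inference, not a figure from the study.

```python
# Sanity check of the article's headline statistics, using its rounded figures.
abusive_mentions = 1_100_000           # problematic or abusive mentions in 2017
seconds_per_year = 365 * 24 * 60 * 60  # 31,536,000 seconds

# "One every 30 seconds on average"
seconds_between = seconds_per_year / abusive_mentions
print(f"One abusive mention every {seconds_between:.0f} seconds")  # ~29 seconds

# The 7.1% rate implies (roughly) how many tweets were surveyed in total.
# This is an inferred figure, not one stated in the article.
implied_total = abusive_mentions / 0.071
print(f"Implied total tweets surveyed: {implied_total:,.0f}")  # "millions of tweets"
```

The ~29-second result matches the article's "one every 30 seconds on average", and the implied total of roughly 15 million tweets is consistent with its description of "millions of tweets" surveyed.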

... Amnesty International and Element AI’s experience using machine learning to detect online abuse against women highlights the risks of leaving it to algorithms to determine what constitutes abuse... Human judgement by trained moderators remains crucial for contextual interpretation... Amnesty International’s full set of recommendations to Twitter is available here.
