

Article

21 December 2018

Author:
Amnesty International

Troll Patrol findings: Using crowdsourcing, data science & machine learning to measure violence & abuse against women on Twitter

These findings are the result of a collaboration between Amnesty International and Element AI, a global artificial intelligence software product company. Together, we surveyed millions of tweets received by 778 journalists and politicians from the UK and US throughout 2017, representing a variety of political views and media spanning the ideological spectrum...

Amnesty International has repeatedly urged Twitter to publicly share comprehensive and meaningful information about reports of violence and abuse against women, as well as other groups, on the platform, and how it responds to them. On 12 December 2018 Twitter released an updated Transparency Report, in which it included for the first time a section on 'Twitter Rules Enforcement'. This was one of Amnesty International's key recommendations to Twitter, and we see the inclusion of this data as an encouraging step. We are disappointed, however, that the information provided in the transparency report does not go far enough...

Our study found that 7.1% of tweets sent to the women in the study were problematic or abusive. This amounts to 1.1 million problematic or abusive mentions of these 778 women across the year, or one every 30 seconds on average. Women of colour were more likely to be impacted, with black women disproportionately targeted with problematic or abusive tweets.
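The "one every 30 seconds" figure follows directly from the reported total. A minimal sketch of that arithmetic, using only the numbers stated above (1.1 million mentions over one year):

```python
# Sanity-check the reported rate: 1.1 million problematic or abusive
# mentions across a year works out to roughly one every 30 seconds.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60   # 31,536,000 seconds
mentions = 1_100_000                    # total reported in the study

seconds_per_mention = SECONDS_PER_YEAR / mentions
print(f"one mention every {seconds_per_mention:.1f} seconds")  # ≈ 28.7 s
```

The exact value is about 28.7 seconds, which the report rounds to "one every 30 seconds on average".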

... Amnesty International and Element AI's experience using machine learning to detect online abuse against women highlights the risks of leaving it to algorithms to determine what constitutes abuse... Human judgement by trained moderators remains crucial for contextual interpretation... Amnesty International's full set of recommendations to Twitter is available here.
