

Article

17 Feb 2020

Author:
Eliska Pirkova, Access Now

Commentary: Automation tools lack the ability to assess context or intent, risking limits on speech

"Automation and illegal content: can we rely on machines making decisions for us?", 17 February 2020

Because a large quantity of internet content is hosted by online platforms, [companies] have to rely on automated tools to find and tackle different categories of illegal or potentially harmful content... While automation is necessary for handling a vast amount of content... it makes mistakes that can be far-reaching for your rights and the well-being of society [including]...

  1. Contextual blindness of automated measures silences legitimate speech... Automated decision-making tools lack an understanding of linguistic or cultural differences... [causing the tools to] flag and remove content that is completely legitimate... [J]ournalists, activists, comedians, artists, [and anyone] sharing... opinions and videos or pictures online risk being censored because internet companies are relying on these poorly working tools... 
  2. Content recognition technologies cannot understand the meaning or intention of those who share a post on social media or the effect it has on others... [T]heir ability to automate the very sensitive task of judging whether something constitutes hate speech will always be fundamentally limited.
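To make the "contextual blindness" described in point 1 above concrete, here is a minimal, hypothetical sketch of the crudest form of automated moderation, a keyword blocklist. Everything in it (the placeholder tokens, the sample posts, the `naive_filter` function) is an illustrative invention, not any platform's actual system; it simply shows why matching tokens rather than meaning flags journalism and counter-speech exactly as it flags abuse.

```python
# Hypothetical illustration only: a naive keyword-based filter, the
# simplest kind of automated content moderation. It cannot distinguish
# abusive use of a term from reporting, quotation, or reclamation.

BLOCKLIST = {"slur_x", "slur_y"}  # placeholder tokens standing in for real slurs

def naive_filter(post: str) -> str:
    """Flag a post for removal if it contains any blocklisted token."""
    tokens = post.lower().split()
    if any(tok.strip(".,!?\"'") in BLOCKLIST for tok in tokens):
        return "REMOVE"
    return "KEEP"

posts = [
    "You are a slur_x and should leave.",                     # abusive use
    'The politician called protesters "slur_x" on live TV.',  # journalist quoting abuse
    "Reclaiming slur_y has a long history in our community.", # commentary / counter-speech
]

for post in posts:
    print(naive_filter(post), "->", post)
# All three posts come back REMOVE: the filter sees the token, not the
# context or intent, so the journalist and the commentator are silenced
# along with the abuser.
```

Real systems use statistical classifiers rather than plain blocklists, but the article's point carries over: a model trained on surface patterns inherits the same blindness to quotation, satire, and linguistic or cultural context.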

We can use [automation tools]... to lessen the burden on platforms, but we need safeguards that ensure we don’t sacrifice our human rights and freedoms because of poorly trained automated tools.
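As a hedged sketch of what one such safeguard could look like in practice, the code below routes low-confidence automated decisions to a human reviewer instead of removing content automatically, and marks removals as appealable. The thresholds, the `Decision` type, and the idea of a single confidence score are assumptions made for illustration, not a description of any deployed system or of Access Now's specific proposals.

```python
# Illustrative human-in-the-loop safeguard: only very-high-confidence
# cases are removed automatically; the wide uncertain band goes to a
# human who can weigh context and intent. All values are assumptions.

from dataclasses import dataclass

@dataclass
class Decision:
    action: str      # "keep", "human_review", or "remove"
    appealable: bool # whether the affected user can contest the outcome

def moderate(score: float,
             remove_threshold: float = 0.95,
             review_threshold: float = 0.60) -> Decision:
    """Map a classifier confidence score to a moderation action."""
    if score >= remove_threshold:
        return Decision("remove", appealable=True)
    if score >= review_threshold:
        return Decision("human_review", appealable=True)
    return Decision("keep", appealable=False)

print(moderate(0.99))  # Decision(action='remove', appealable=True)
print(moderate(0.75))  # Decision(action='human_review', appealable=True)
print(moderate(0.30))  # Decision(action='keep', appealable=False)
```

The design choice this sketch illustrates is the one the article argues for: automation narrows the volume of content a platform must handle, while human judgment and an appeal path remain in the loop for the sensitive calls.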