Commentary: Automation tools lack the ability to assess context or intent, risking limits on speech
"Automation and illegal content: can we rely on machines making decisions for us?", 17 February 2020
Because a large quantity of internet content is hosted by online platforms, [companies] have to rely on automated tools to find and tackle different categories of illegal or potentially harmful content... While automation is necessary for handling a vast amount of content... it makes mistakes that can be far-reaching for your rights and the well-being of society [including]...
- Contextual blindness of automated measures silences legitimate speech... Automated decision-making tools lack an understanding of linguistic or cultural differences... [causing the tools to] flag and remove content that is completely legitimate... [J]ournalists, activists, comedians, artists, [and anyone] sharing... opinions and videos or pictures online risk being censored because internet companies are relying on these poorly working tools...
- Content recognition technologies cannot understand the meaning or intention of those who share a post on social media or the effect it has on others... [T]heir ability to automate the very sensitive task of judging whether something constitutes hate speech will always be fundamentally limited.
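The "contextual blindness" described above can be illustrated with a minimal sketch. The blocklisted word and example posts below are invented for illustration, and real moderation systems are far more complex than a keyword match; the point is only that a filter with no model of context flags abuse, news reporting, and counter-speech alike.

```python
# Hypothetical naive keyword filter: flags any post containing a
# blocklisted word, with no awareness of who is speaking or why.
BLOCKLIST = {"scum"}  # invented example term

def naive_filter(post: str) -> bool:
    """Return True if the post contains any blocklisted word."""
    words = {w.strip(".,!?\"'").lower() for w in post.split()}
    return not BLOCKLIST.isdisjoint(words)

posts = [
    "You people are scum.",                                      # abusive
    "The mayor condemned the poster calling migrants 'scum'.",   # reporting
    "Calling anyone scum is never acceptable.",                  # counter-speech
]

for p in posts:
    print(naive_filter(p), "->", p)
# All three posts are flagged, even though only the first is abusive.
```

Because the filter sees only the word and not the intent, the journalist's report and the counter-speech are removed along with the abuse, which is exactly the over-removal risk the commentary warns about.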
We can use [automation tools]... to lessen the burden on platforms, but we need safeguards that ensure we don't sacrifice our human rights and freedoms because of poorly trained automated tools.