
Article

11 March 2021

Author:
Karen Hao, MIT Technology Review

Facebook's AI algorithms make misinformation & hate speech hard to uproot

"How Facebook got addicted to spreading misinformation", 11 March 2021

... The models that maximize engagement also favor controversy, misinformation, and extremism: put simply, people just like outrageous stuff. Sometimes this inflames existing political tensions. The most devastating example to date is the case of Myanmar, where viral fake news and hate speech about the Rohingya Muslim minority escalated the country’s religious conflict into a full-blown genocide. Facebook admitted in 2018, after years of downplaying its role, that it had not done enough “to help prevent our platform from being used to foment division and incite offline violence.”

... To catch things before they go viral, content-moderation models must be able to identify new unwanted content with high accuracy. But machine-learning models do not work that way. An algorithm that has learned to recognize Holocaust denial can’t immediately spot, say, Rohingya genocide denial. It must be trained on thousands, often even millions, of examples of a new type of content before learning to filter it out. Even then, users can quickly learn to outwit the model by doing things like changing the wording of a post or replacing incendiary phrases with euphemisms, making their message illegible to the AI while still obvious to a human. This is why new conspiracy theories can rapidly spiral out of control, and partly why, even after such content is banned, forms of it can persist on the platform.
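To make the generalization problem concrete, below is a minimal, hypothetical sketch (not Facebook's actual system) of why a bag-of-words moderation classifier fails on reworded content: it only carries weight for tokens it saw during training, so a euphemistic rephrasing of the same message produces almost no signal and is scored as benign. The training strings, labels, and the reworded post are placeholders invented for illustration.

```python
# Hypothetical illustration of the generalization gap in keyword-style
# content classifiers. Requires scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data; real moderation models need thousands to
# millions of labelled examples per content type (1 = violating, 0 = benign).
train_texts = [
    "join the attack on GROUP_X tonight",       # violating
    "GROUP_X does not deserve to live here",    # violating
    "lovely weather for a picnic today",        # benign
    "sharing photos from my trip",              # benign
    "what a great football match last night",   # benign
]
train_labels = [1, 1, 0, 0, 0]

model = make_pipeline(
    CountVectorizer(stop_words="english"),  # bag-of-words features
    LogisticRegression(),
)
model.fit(train_texts, train_labels)

# A euphemistic rewording of the first violating post: its content words do
# not appear in the training vocabulary, so the feature vector is nearly
# empty and the model typically falls back toward the benign class.
original = ["join the attack on GROUP_X tonight"]
reworded = ["time to deal with the neighbours once and for all"]
print(model.predict(original))   # typically [1]: the known phrasing is flagged
print(model.predict(reworded))   # typically [0]: the rewording slips through
```

The same limitation is why, as the article notes, a model trained on one category of harmful content (e.g. Holocaust denial) carries essentially no weight for a new category until it is retrained on large volumes of labelled examples of that new content.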

... When I described the Responsible AI team’s work to other experts on AI ethics and human rights, they noted the incongruity between the problems it was tackling and those, like misinformation, for which Facebook is most notorious... “It seems like the ‘responsible AI’ framing is completely subjective to what a company decides it wants to care about. It’s like, ‘We’ll make up the terms and then we’ll follow them,’” says Ellery Roberts Biddle, the editorial director of Ranking Digital Rights... “I don’t even understand what they mean when they talk about fairness. Do they think it’s fair to recommend that people join extremist groups, like the ones that stormed the Capitol? If everyone gets the recommendation, does that mean it was fair?”
