

Article

1 August 2022

Author:
Global Witness

Kenya: Report says Facebook has failed to control hate speech ahead of general election; company comments

"Facebook unable to detect hate speech weeks away from tight Kenyan election"

Despite the risk of violence around the upcoming Kenyan election, our new investigation, conducted in partnership with legal non-profit Foxglove, finds that Facebook appallingly failed to detect hate speech ads in the country's two official languages: Swahili and English.

This follows a similar pattern we uncovered in Myanmar and Ethiopia, but for the first time it also raises serious questions about Facebook's content moderation capabilities in English. Facebook itself has praised its "super-efficient AI models to detect hate speech", but our findings are a stark reminder of the risk of hate and incitement to violence on its platform. Worse still, this is the lead-up to a high-stakes election, a time when you would expect Facebook's systems to be even more primed for safety...

When asked for comment on the investigation findings, a spokesperson for Meta, Facebook's parent company, responded to Global Witness [iii] that the company has taken "extensive steps" to help it "catch hate speech and inflammatory content in Kenya" and that it is "intensifying these efforts ahead of the election". The spokesperson said Meta has "dedicated teams of Swahili speakers and proactive detection technology to help us remove harmful content quickly and at scale", while acknowledging that there will be instances where it misses things or takes down content in error, "as both machines and people make mistakes."

Timeline