Story

13 Jan 2020

Facebook announces new policy on deepfakes & other manipulated media

On 6 January 2020, Facebook announced a new policy addressing manipulated content on its platform. The policy sets out two clear criteria for removing media that has been:

  1. Edited or synthesized in ways that aren't apparent to the average person; and
  2. Edited by artificial intelligence or machine learning that merges, replaces, or superimposes content onto a video, making it appear authentic.

The policy adds to the company's existing policies on nudity, graphic violence, voter suppression, and hate speech. Critics, including members of civil society, argue that the policy is too narrow and still allows content edited with simple tools to remain on the platform; they contend that this type of content makes up the majority of manipulated media there. Facebook noted that false information is flagged by third-party fact-checkers and that viewers are notified before seeing content deemed false, but critics countered that this system is too slow and still incapable of stopping the widespread misinformation seen on the platform.