Facebook announces new policy on deepfakes & other manipulated media

On 6 January 2020, Facebook announced a new policy addressing manipulated content on its platform. The policy sets out clear, two-pronged criteria for removing media that has been:

  1. Edited or synthesized in ways that aren't apparent to the average person and would likely mislead viewers into thinking the subject said words they did not actually say; and
  2. Produced by artificial intelligence or machine learning that merges, replaces, or superimposes content onto a video, making it appear to be authentic.

The policy supplements the company's existing policies on nudity, graphic violence, voter suppression and hate speech. Critics of the policy, including members of civil society, argue that it is too narrow and still allows content edited with simple tools to remain on the platform; they contend this type of content makes up the majority of manipulated media on Facebook. Facebook noted that false information is flagged by third-party fact-checkers and that viewers are warned before seeing content rated false, but critics argued that this system is too slow and incapable of stopping the spread of misinformation seen on the platform.


All components of this story

Article
13 January 2020

Facebook announces ban on manipulated media

Author: Monika Bickert, Vice President, Global Policy Management, Facebook

"Enforcing Against Manipulated Media", 6 January 2020

[W]e are strengthening our policy toward misleading manipulated videos that have been identified as deepfakes. Going forward, we will remove misleading manipulated media if it meets the following criteria:

  • It has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say. And:
  • It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.

This policy does not extend to content that is parody or satire, or video that has been edited solely to omit or change the order of words... Consistent with our existing policies, audio, photos or videos, whether a deepfake or not, will be removed from Facebook if they violate any of our other Community Standards including those governing nudity, graphic violence, voter suppression and hate speech... If a photo or video is rated false or partly false by a fact-checker, we significantly reduce its distribution... and reject it if it’s... an ad... [P]eople who see it, try to share it, or have already shared it, will see warnings alerting them that it’s false... By leaving [manipulated videos] up and labelling them as false, we’re providing people with important information and context.

Read the full post here

Article
13 January 2020

Facebook's new policy on misleading information excludes content posted by politicians

"Facebook bans 'deepfake' videos in run-up to US election", 7 January 2020

[Facebook's] policy explicitly covers only misinformation produced using AI... “shallow fakes” – videos made using conventional editing tools – are still allowed on the platform... The most damaging examples of manipulated media in recent years have tended to be created using simple video-editing tools... The company... has a separate policy that allows any content that breaks its other rules to remain online if it is judged “newsworthy”... [A]ll content posted by politicians is automatically seen as such... “If someone makes a statement or shares a post which breaks our community standards we will still allow it on our platform if we believe the public interest in seeing it outweighs the risk of harm,” said Nick Clegg, Facebook’s vice-president of global affairs and communications... “From now on we will treat speech from politicians as newsworthy content that should, as a general rule, be seen and heard.” That policy means that even an AI-created deepfake video expressly intended to mislead could still remain on the social network, if it was posted by a politician... Facebook did not give a reason as to why it limited its policy exclusively to those videos manipulated using AI tools, but it is likely that the company wanted to avoid putting itself in a situation where it had to make subjective decisions about intent or truth.

Read the full post here