
WITNESS Media Lab report highlights challenges of AI-manipulated media & argues for better data sharing

Author: Sam Gregory & Eric French, WITNESS Media Lab, Published on: 27 June 2019

"How do we work together to detect AI-manipulated media?", 01 Jun 2019

New artificial intelligence-based tools are increasingly able to create convincing simulations of authentic media, including sophisticated audio and video manipulations called “deepfakes,” and more subtle manipulations. These media forms have the potential to amplify, expand, and alter existing problems around trust in information, verification of media, and weaponization of online spaces… [W]e see a critical need to bring together key actors before we are in the eye of the storm…

[This report focuses on:] 1) Identifying how emerging approaches to detecting deepfakes and synthetic media manipulation are accessible and understandable to a community that will use them in the real world for journalism, human rights and transparency; 2) Building connectivity between key researchers and frontline debunkers to ensure preparedness for potential real-world usage of deepfakes; and 3) Identifying how emerging tools and approaches to detection could be incorporated into existing workflows used by journalists and open-source investigators...

[Challenges include:] Quality (and quantity) of the data for detection is poor… Partial manipulation [and]… [f]ake audio [are] harder to detect… [and] [t]here is a lack of coordination between tech/social media platforms to track misinformation and manipulated content and take it down... Better data sharing… [and] [m]ore case studies… are needed [to] build a better shared understanding of security risks and vulnerabilities for individuals revealing fakes...

Read the full post here