Generative AI companies face allegations of prioritising profits over effective rightsholder-oriented guardrails; OpenAI responds
In 2025, BHRC researchers have logged a number of cases of generative AI harms in our database. One entry recounts OpenAI’s decision to block depictions of Martin Luther King Jr. on Sora after his family raised concerns that AI-generated representations were misrepresenting the civil rights leader and reinforcing racial stereotypes. Reportedly, women are now on the receiving end of AI-generated threats and online harassment in increasingly visceral forms, with perpetrators often combining real photos, synthetic audio, and fabricated videos to create immersive, traumatizing death threats. In one case, a woman received AI-generated images depicting her hanging from a noose and screaming while on fire. Some women receive AI-generated threats that depict them wearing clothes they actually own. These instances amplify psychological harm and fear, normalize discrimination online, and ultimately create a chilling effect on participation in public discourse.
Most recently, reports from 404 Media allege that generative AI content is facilitating the dehumanization of immigrant communities through the generation and algorithmic amplification of fabricated ICE raid videos. Tech companies, including those monetizing AI-generated content, are allegedly profiting from this exploitation, prioritizing revenue over effective guardrails and algorithmically rewarding the circulation of fear, anger, and prejudice. By prioritizing engagement over truth, platforms can normalize cruelty and obscure human suffering behind sensationalized, synthetic images.
The societal impact is profound. People struggle to distinguish fact from fiction, empathy and a sense of shared reality erode, social cohesion begins to fracture, and accountability for facilitating discrimination and violence is lost somewhere in the thousands of lines of company Terms of Service. Meanwhile, communities targeted by AI-generated harassment face real-world consequences: stigmatization, threats, psychological trauma and, at times, physical attacks. These examples are not isolated incidents; they are indicators of a systemic risk that arises where generative AI intersects with existing inequalities, misguided algorithmic incentives, and ineffective (or nonexistent) regulation of the human rights impacts of artificial intelligence tools.
According to the UN Guiding Principles on Business and Human Rights, companies have a responsibility to identify risks linked to their business operations, and responses to allegations of harm should acknowledge both technical and societal realities. Companies developing and profiting from generative AI-powered synthetic content should commit to robust, cross-platform enforcement of nondiscrimination and nonviolence, including watermarks and C2PA metadata that cannot be easily removed, and should actively monitor misuse beyond their own apps. They should collaborate directly with other platforms to enable rapid (yet accountability-oriented) takedowns of harmful and inciting content, provide accessible detection tools for all of their users, especially journalists and human rights activists, and invest in effective grievance mechanisms for at-risk communities. Technology companies must recognize the direct psychological and broader societal impacts of AI-generated disinformation and dehumanization, and ensure that commitments to "responsible innovation" extend into real-world enforcement.
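To illustrate what "accessible detection tools" could look like in practice, the sketch below shows one way a journalist or researcher might check whether a downloaded video still carries C2PA provenance metadata. It is a minimal, illustrative example only: it assumes the open-source c2patool command-line utility from the C2PA project is installed and on the PATH, and the exact output format may differ between versions. A missing manifest does not prove a video is authentic or synthetic; it often simply means the metadata was stripped when the file was re-encoded or re-shared.

```python
import json
import subprocess
import sys


def check_c2pa_manifest(path: str) -> None:
    """Illustrative check: ask c2patool whether a media file carries a
    C2PA manifest. Assumes the open-source c2patool CLI is installed;
    its output format may vary between versions, so treat this as a sketch."""
    try:
        result = subprocess.run(
            ["c2patool", path],  # prints the manifest store as JSON when one is present
            capture_output=True, text=True, timeout=60,
        )
    except FileNotFoundError:
        sys.exit("c2patool not found; install it from the C2PA project first.")

    if result.returncode != 0:
        # Common outcome for content re-encoded or re-uploaded by third-party
        # platforms: the provenance metadata has been stripped along the way.
        print(f"{path}: no readable C2PA manifest (metadata missing or stripped)")
        return

    try:
        manifest_store = json.loads(result.stdout)
    except json.JSONDecodeError:
        print(f"{path}: c2patool returned unexpected output; inspect manually")
        return

    print(f"{path}: C2PA manifest found")
    # A manifest can identify the generating tool among its claims; the exact
    # keys depend on the manifest's contents, so we just print a preview here.
    print(json.dumps(manifest_store, indent=2)[:1000])


if __name__ == "__main__":
    for video in sys.argv[1:]:
        check_c2pa_manifest(video)
```

Even when such a check succeeds at the point of creation or download, the criticism above still applies: once a video is re-encoded and re-shared across platforms, the manifest is routinely lost, which is why cross-platform enforcement and more durable watermarking are needed.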
As Sam Gregory, Executive Director of WITNESS, has explained, there are concrete steps companies should be taking to address the human rights risks and harms they are generating:
“1. Epistemic Harm Not Just Individual Content Risks. The concern isn't just specific deceptive videos, but the cumulative erosion of society's ability to trust any visual evidence when high-quality synthetic video becomes ubiquitous. They need to articulate how they are thinking about collective harm to visual truth and trust, not just individualized misinformation or disinformation cases.
2. Out-of-Platform Reality: Where Content Actually Lives. Safeguards designed for OpenAI's controlled environment fail when Sora content circulates on social platforms, messaging apps, and the broader web, where provenance metadata is lost and watermarks are removed.
3. Provenance & Watermarking: The Implementation Gap. C2PA metadata and watermarking are critical but currently ineffective due to easy removal, inconsistent cross-platform implementation, and premature claims about their reliability. There is insufficient senior leadership investment and resourcing into making this work.
4. Detection: Not Usable on Real-World Frontlines. OpenAI has not supported frontline journalists and civil society organizations who need practical, accessible detection tools that work in under-resourced, real-world verification contexts.
5. Likeness Protection: Beyond In-App Controls. In-app likeness controls are insufficient when ordinary people have no scalable way to detect or challenge misuse of their likeness across the open web.
6. AI, Provenance Literacy and Verification Investments. Sora launched without preparing vulnerable communities globally with the media literacy and verification capacity needed to navigate a rapidly changing synthetic content environment.”
On 4 December 2025, the Business & Human Rights Centre invited OpenAI to respond to allegations that the company’s technology, Sora, is being used to generate “videos that take advantage of human suffering”, and to concerns about how “incredibly easy it is to hide a Sora watermark”. OpenAI responded, stating:
“AI-generated videos are created and shared across many different tools, so addressing deceptive content requires an ecosystem-wide effort. For our part, on the creation side, we take a layered approach: Sora includes visible, dynamic watermarks and C2PA provenance metadata on downloaded videos, and we maintain internal systems designed to determine whether a video originated with us. Our usage policies prohibit deceptive or misleading use, and we take action when we detect violations…”
Click here to see OpenAI's full response. Meta did not respond to a request for comment from 404 Media.