Generative AI-powered threats and harassment are becoming more traumatizing, with women frequently among those targeted
"A.I. Is Making Death Threats Way More Realistic" (31 October 2025)
Even after years spent working in internet activism had toughened her, Caitlin Roper found herself traumatized by the online threats she received this year.
There was the picture of herself hanging from a noose, dead. And another of herself ablaze, screaming.
The posts were part of a surge of vitriol directed at Ms. Roper and her colleagues at Collective Shout, an Australian activist group, on X and other social media platforms. Some of it, including images of the women flayed, decapitated or fed into a wood chipper, was seemingly enabled — and given a visceral realism — by generative artificial intelligence. In some of the images, Ms. Roper was wearing a blue floral dress that she does, in fact, own...
...Artificial intelligence is already raising concerns for its ability to mimic real voices in service of scams or to produce deepfake pornography without a subject’s permission. Now, the technology is also being used for violent threats — priming them to maximize fear by making them far more personalized, more convincing and more easily delivered...
...Digitally generated threats have been possible for at least a few years. A judge in Florida was sent a video in 2023, most likely made using a character customization tool in the Grand Theft Auto 5 video game, that featured an avatar who looked and walked like her being hacked and shot to death...
...But threatening images are rapidly becoming easier to make, and more persuasive. One YouTube page had more than 40 realistic videos — most likely made using A.I., according to experts who reviewed the channel — each showing a woman being shot. (YouTube, after The New York Times contacted it, said it had terminated the channel for “multiple violations” of its guidelines.) A deepfake video of a student carrying a gun sent a high school into lockdown this spring. In July, a lawyer in Minneapolis said xAI’s Grok chatbot had provided an anonymous social media user with detailed instructions on breaking into his house, sexually assaulting him and disposing of his body...
...Worries about A.I.-assisted threats and extortion intensified with the introduction this month of Sora, a text-to-video app from OpenAI...The Times tested Sora and produced videos that appeared to show a gunman in a bloody classroom and a hooded man stalking a young girl. Grok also readily added a bloody gunshot wound to a photo of a real person...
...An OpenAI spokeswoman said the company relied on multiple defenses, including guardrails to block unsafe content from being created, experiments to uncover previously unknown weaknesses and automated content moderation systems...
...Neither X nor xAI, the company that owns Grok, responded to requests for comment...