
Article

12 June 2024

Author:
Katie McQue, The Guardian

Experts warn of rising AI misuse by child predators to generate sexual images from old abuse material, affecting survivors

"Child predators are using AI to create sexual images of their favorite ‘stars’: ‘My body will never be mine again’", 12 June 2024

Predators active on the dark web are increasingly using artificial intelligence to create sexually explicit images of children, fixating especially on “star” victims, child safety experts warn.

Child safety groups tracking the activity of predators chatting in dark web forums say they are increasingly finding conversations about creating new images based on older child sexual abuse material (CSAM). Many of these predators using AI obsess over child victims referred to as “stars” in predator communities for the popularity of their images.

“The communities of people who trade this material get infatuated with individual children,” said Sarah Gardner, chief executive officer of the Heat Initiative, a Los Angeles non-profit focused on child protection. “They want more content of those children, which AI has now allowed them to do.”

These abuse survivors may now be grown adults, but AI has exacerbated the prospect that more people may be viewing sexual content depicting them as children, according to experts and abuse survivors interviewed. They fear that images of them circulating the internet or their communities could threaten the lives and careers they’ve built since their abuse ended.

Megan, a survivor of CSAM, whose last name is being withheld because of past violent threats, says that the potential for AI to be used to manipulate her images has become an increasingly stressful prospect over the past 12 months, though her own abuse occurred a decade ago.

“AI gives perpetrators the chance to create even more situations of my abuse to feed their own fantasies and their own versions,” she said. “The way my images could be manipulated with AI could give the false impression it was not harmful or that I was enjoying the abuse.”

Since dark web browsers enable users to be anonymous or untraceable, child safety groups have few means of requesting these images be removed or reporting the users to law enforcement.

Advocates have called for legislation that goes beyond criminalization to prevent the production of CSAM, by AI and otherwise. They are pessimistic, however, that bans on the creation of new sexualized images of children can be enforced now that the AI tools enabling it are open source and can be run privately. Encrypted messaging services, now often enabled by default, allow predators to communicate undetected, advocates say.

Creating new CSAM and reviving old CSAM with AI

The Guardian has viewed several excerpts of these dark web chat room conversations, with the names of victims redacted for safeguarding. The discussions take an amiable tone, and forum members are encouraged to create new images with AI to share in the groups. Many said they were thrilled at the prospect of new material made with AI; others were uninterested because the images do not depict real abuse.

...

Data bears out the phenomenon of predators’ preoccupation with “stars”. In a 2020 assessment to the National Center for Missing and Exploited Children, Meta reported that just six videos accounted for half of all the child sexual abuse material being shared and re-shared on Facebook and Instagram. Roughly 90% of the abusive material Meta tracked in a two-month period was the same as previously reported content.

Real Hollywood celebrities are also potential targets for victimization with AI-generated CSAM. The Guardian reviewed chatroom threads on the dark web discussing desires for predators who are proficient in AI to create child abuse images of celebrities, including teen idols from the 1990s who are now adults.

How child sexual abuse material made by AI spreads

Predators’ use of AI became prevalent at the end of 2022, child safety experts said. That same year, OpenAI debuted ChatGPT and an eponymous non-profit launched LAION-5B, an open-source database of more than 5bn images that anyone can use to train AI models.

A Stanford University report released in December 2023 revealed that hundreds of known images of child sexual abuse had been included in LAION-5B and have since been used to train popular AI image-generation models, enabling them to generate CSAM. Though the images were a minor fraction of the whole database, they carry an outsize risk, experts said.

“As soon as these things were open sourced, that’s when the production of AI generative CSAM exploded,” said Dan Sexton, chief technology officer at the Internet Watch Foundation, a UK-based non-profit that focuses on preventing online child abuse.

The knowledge that real abuse images are used to train AI models has resulted in additional trauma for some survivors.

Experts say they’ve seen a shift towards predators using encrypted private messaging services such as WhatsApp, Signal and Telegram to spread and access CSAM. A great deal of CSAM is still shared outside of mainstream channels on the dark web, though. In an October 2023 report, the Internet Watch Foundation (IWF) said it had found more than 20,000 AI-generated sexual images of children posted on a single dark web forum during September alone.

Over the last year, AI image generators have improved across the board, and their output has become increasingly realistic. Child safety experts said AI-generated still images are often indistinguishable from real-life photos.

What effect will AI-generated CSAM have?

Experts say the impact of AI-generated CSAM is only starting to come into focus. In certain circumstances, viewing CSAM online can cause a predator’s behavior to escalate to committing contact offences with children, and it remains to be seen how AI plays into that dynamic.

Some predators mistakenly believe that viewing AI-generated CSAM may be more ethical than “real life” material, experts said.

What can be done to curb AI-generated sexualized images of children?

In many countries, including the US and UK, decades-old laws already criminalize any CSAM created using AI via prohibitions on any indecent or obscene visual depictions of children. Pornographic depictions of Taylor Swift made by AI and circulated early this year prompted the introduction of legislation in the US that would regulate such deepfakes.

Child safety and tech experts interviewed were pessimistic on whether it is possible to prevent the production and distribution of AI-generated CSAM. They highlight that much of the production goes undetected by the authorities.

AI software is downloadable, which means these abusive and illegal activities can be taken offline.

“This means offenders can do it in the privacy of their own home, within the walls of their own network, therefore they’re not susceptible to getting caught doing this,” said Marcoux.

[Coverage of the allegations that images of child sexual abuse were included in LAION-5B can be read here]
