

Article

12 June 2024

Author:
Katie McQue, The Guardian

Experts warn that child predators are increasingly misusing AI to generate new sexual images from old abuse material, compounding the harm to survivors

"Child predators are using AI to create sexual images of their favorite ‘stars’: ‘My body will never be mine again’", 12 June 2024

Predators active on the dark web are increasingly using artificial intelligence to create sexually explicit images of children, fixating especially on “star” victims, child safety experts warn.

Child safety groups tracking the activity of predators chatting in dark web forums say they are increasingly finding conversations about creating new images based on older child sexual abuse material (CSAM). Many of these predators using AI obsess over child victims referred to as “stars” in predator communities for the popularity of their images.

“The communities of people who trade this material get infatuated with individual children,” said Sarah Gardner, chief executive officer of the Heat Initiative, a Los Angeles non-profit focused on child protection. “They want more content of those children, which AI has now allowed them to do.”

These abuse survivors may now be grown adults, but AI has heightened the prospect that more people will view sexual content depicting them as children, according to experts and abuse survivors interviewed. They fear that images of them circulating on the internet or in their communities could threaten the lives and careers they’ve built since their abuse ended.

Megan, a survivor of CSAM, whose last name is being withheld because of past violent threats, says that the potential for AI to be used to manipulate her images has become an increasingly stressful prospect over the past 12 months, though her own abuse occurred a decade ago.

“AI gives perpetrators the chance to create even more situations of my abuse to feed their own fantasies and their own versions,” she said. “The way my images could be manipulated with AI could give the false impression it was not harmful or that I was enjoying the abuse.”

Since dark web browsers enable users to be anonymous or untraceable, child safety groups have few means of requesting these images be removed or reporting the users to law enforcement.

Advocates have called for legislation that goes beyond criminalization to prevent the production of CSAM, by AI and otherwise. They are pessimistic, though, that much can be done to enforce bans on the creation of new sexualized images of children now that the AI enabling it is open source and can be run privately. Encrypted messaging services, now often default options, allow predators to communicate undetected, advocates say.

Creating new CSAM and reviving old CSAM with AI

The Guardian has viewed several excerpts of these dark web chat room conversations, with the names of victims redacted for safeguarding. The discussions take an amiable tone, and forum members are encouraged to create new images with AI to share in the groups. Many said they were thrilled at the prospect of new material made with AI, while others were uninterested because the images do not depict real abuse.

...

Data bears out the phenomenon of predators’ preoccupation with “stars”. In a 2020 assessment to the National Center for Missing and Exploited Children, Meta reported that just six videos accounted for half of all the child sexual abuse material being shared and re-shared on Facebook and Instagram. Roughly 90% of the abusive material Meta tracked in a two-month period was the same as previously reported content.

Real Hollywood celebrities are also potential targets for victimization with AI-generated CSAM. The Guardian reviewed chatroom threads on the dark web discussing desires for predators who are proficient in AI to create child abuse images of celebrities, including teen idols from the 1990s who are now adults.

How child sexual abuse material made by AI spreads

Predators’ use of AI became prevalent at the end of 2022, child safety experts said. The same year that OpenAI debuted ChatGPT, the LAION-5B database, an open-source catalogue of more than 5bn images that anyone can use to train AI models, was launched by an eponymous non-profit.

A Stanford University report released in December 2023 revealed that hundreds of known images of child sexual abuse had been included in LAION-5B and were being used to train popular AI image generation models to generate CSAM. Though the images were a minor fraction of the whole database, they carry an outsize risk, experts said.

“As soon as these things were open sourced, that’s when the production of AI generative CSAM exploded,” said Dan Sexton, chief technology officer at the Internet Watch Foundation, a UK-based non-profit that focuses on preventing online child abuse.

The knowledge that real abuse images are used to train AI models has resulted in additional trauma for some survivors.

Experts say they’ve seen a shift towards predators using encrypted private messaging services such as WhatsApp, Signal and Telegram to spread and access CSAM. A great deal of CSAM is still shared outside of mainstream channels on the dark web, though. In an October 2023 report, the Internet Watch Foundation (IWF) said it had found more than 20,000 AI-generated sexual images of children posted on a single dark web forum in one month, September 2023.

Over the last year, AI image generators have improved across the board, and their output has become increasingly realistic. Child safety experts said AI-generated still images are often indistinguishable from real-life photos.

What effect will AI-generated CSAM have?

Experts say the impact of AI-generated CSAM is only starting to come into focus. In certain circumstances, viewing CSAM online can cause a predator’s behavior to escalate to committing contact offences with children, and it remains to be seen how AI plays into that dynamic.

Some predators mistakenly believe that viewing AI-generated CSAM may be more ethical than “real life” material, experts said.

What can be done to curb AI-generated sexualized images of children?

In many countries, including the US and UK, decades-old laws already criminalize any CSAM created using AI via prohibitions on any indecent or obscene visual depictions of children. Pornographic depictions of Taylor Swift made by AI and circulated early this year prompted the introduction of legislation in the US that would regulate such deepfakes.

Child safety and tech experts interviewed were pessimistic on whether it is possible to prevent the production and distribution of AI-generated CSAM. They highlight that much of the production goes undetected by the authorities.

AI image-generation software can be downloaded and run locally, which means these abusive and illegal activities can take place entirely offline.

“This means offenders can do it in the privacy of their own home, within the walls of their own network, therefore they’re not susceptible to getting caught doing this,” said Marcoux.

[Coverage of the allegations that images of child sexual abuse were included in LAION-5B can be read here]
