
Article

13 September 2023

Author:
Dan Milmo, The Guardian

CSO reveals that paedophiles are using open-source AI to create child sexual abuse content

"Paedophiles using open source AI to create child sexual abuse content, says watchdog", 13 September 2023

Freely available artificial intelligence software is being used by paedophiles to create child sexual abuse material (CSAM), according to a safety watchdog, with offenders discussing how to manipulate photos of celebrity children or known victims to create new content.

The Internet Watch Foundation said online forums used by sex offenders were discussing using open source AI models to create fresh illegal material. The warning came as the chair of the government’s AI taskforce, Ian Hogarth, raised concerns about CSAM... as he told peers that open source models were being used to create “some of the most heinous things out there”.

Open source AI technology can be downloaded and adjusted by users, as opposed to closed model tools such as OpenAI’s Dall-E or Google’s Imagen whose underlying models – which underpin the creation of images – cannot be accessed or changed by members of the public.

Dan Sexton, chief technology officer at the Internet Watch Foundation, told the Guardian paedophile discussion forums on the dark web were discussing matters such as which open source models to use and how to achieve the most realistic images.

The discussions include using images of celebrity children, publicly available images of children or images of known child abuse victims to create new abuse content. “All of these ideas are concerns and we have seen discussions about them,” said Sexton.

According to forum discussions seen by the IWF, offenders start with a basic source image generating model that is trained on billions and billions of tagged images, enabling them to carry out the basics of image generation. This is then fine-tuned with CSAM images to produce a smaller model using low-rank adaptation, which lowers the amount of compute needed to produce the images.

Asked if the IWF, which searches for CSAM and coordinates its removal as well as operating a hotline for tipoffs, could be overwhelmed by AI-made material, Sexton said: “Child sexual abuse online is already, as we believe, a public health epidemic. So this is not going to make the problem any better. It’s only going to potentially make it worse.”

Law enforcement and child safety experts fear that photorealistic images of CSAM, which are illegal in the UK, will make it more difficult to identify and help real-life victims. They are also concerned that the sheer potential volume of such imagery could make it more widely consumed.

... the BBC reported that Stable Diffusion, an open source AI image generator, was being used to create abuse images from text prompts typed in by humans. Sexton said Stable Diffusion had been discussed in online offender communities.

Stability AI, the UK company behind Stable Diffusion, told the BBC it “prohibits any misuse for illegal or immoral purposes across our platforms, and our policies are clear that this includes CSAM”.

The IWF warned in June that AI-generated material was emerging online. It investigated 29 reports of webpages containing suspected AI-made material over a five-week period this summer and found that seven of them contained AI-generated CSAM.

Andrew Rogoyski, of the Institute for People-Centred AI at the University of Surrey, said: “Open source AI is important to democratising AI, ensuring that this powerful technology isn’t controlled by a handful of very large corporates. The downside of making AI software freely available is that there are people who will misuse the technology.”

However, he added that open source software could in turn provide a solution because it could be adapted.

A UK government spokesperson said AI-generated CSAM would be covered by the forthcoming online safety bill and social media platforms would be required to prevent it from appearing on platforms.

Speaking at a House of Lords communications and digital committee meeting on Tuesday, Hogarth said dealing with the issue of open source as opposed to closed source systems was a big challenge.

He said closed source systems had issues with a lack of transparency about their contents and their potential for damaging competition, while there were concerns about “irreversible proliferation” of open source models. Hogarth referred to concerns over CSAM generation and added that deployment of open source models could not be reversed.
