

Article

13 September 2023

Author:
Dan Milmo, The Guardian

CSO exposes paedophiles' use of open-source AI to create child sexual abuse content

"Paedophiles using open source AI to create child sexual abuse content, says watchdog", 13 September 2023

Freely available artificial intelligence software is being used by paedophiles to create child sexual abuse material (CSAM), according to a safety watchdog, with offenders discussing how to manipulate photos of celebrity children or known victims to create new content.

The Internet Watch Foundation said online forums used by sex offenders were discussing using open source AI models to create fresh illegal material. The warning came as the chair of the government’s AI taskforce, Ian Hogarth, raised concerns about CSAM... as he told peers that open source models were being used to create “some of the most heinous things out there”.

Open source AI technology can be downloaded and adjusted by users, as opposed to closed tools such as OpenAI’s Dall-E or Google’s Imagen, whose underlying image-generation models cannot be accessed or changed by members of the public.

Dan Sexton, chief technology officer at the Internet Watch Foundation, told the Guardian paedophile discussion forums on the dark web were discussing matters such as which open source models to use and how to achieve the most realistic images.

The discussions include using images of celebrity children, publicly available images of children or images of known child abuse victims to create new abuse content. “All of these ideas are concerns and we have seen discussions about them,” said Sexton.

According to forum discussions seen by the IWF, offenders start with a base image-generation model that has been trained on billions of tagged images, which handles general image generation. This is then fine-tuned on CSAM images using low-rank adaptation to produce a smaller model, which lowers the amount of compute needed to generate the images.
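The compute saving from low-rank adaptation comes from freezing the base model’s weights and training only small additional matrices. As a rough illustration with assumed numbers (the dimensions and rank here are hypothetical, not figures from the IWF): fully fine-tuning one weight matrix W of size d × k would update all d·k parameters, whereas low-rank adaptation learns only two factors, B (d × r) and A (r × k), whose product BA is added to W, so just r(d + k) parameters are trained. With d = k = 4,096 and rank r = 8, that is roughly 65,000 trainable parameters per matrix rather than about 16.8 million, which is why the resulting adapter models are comparatively small and cheap to train.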

Asked if the IWF, which searches for CSAM and coordinates its removal as well as operating a hotline for tipoffs, could be overwhelmed by AI-made material, Sexton said: “Child sexual abuse online is already, as we believe, a public health epidemic. So this is not going to make the problem any better. It’s only going to potentially make it worse.”

Law enforcement and child safety experts fear that photorealistic images of CSAM, which are illegal in the UK, will make it more difficult to identify and help real-life victims. They are also concerned that the sheer potential volume of such imagery could make it more widely consumed.

... the BBC reported that Stable Diffusion, an open source AI image generator, was being used to create abuse images from text prompts typed in by humans. Sexton said Stable Diffusion had been discussed in online offender communities.

Stability AI, the UK company behind Stable Diffusion, told the BBC it “prohibits any misuse for illegal or immoral purposes across our platforms, and our policies are clear that this includes CSAM”.

The IWF warned in June that AI-generated material was emerging online. It investigated 29 reports of webpages containing suspected AI-made material over a five-week period this summer and found that seven of them contained AI-generated CSAM.

Andrew Rogoyski, of the Institute for People-Centred AI at the University of Surrey, said: “Open source AI is important to democratising AI, ensuring that this powerful technology isn’t controlled by a handful of very large corporates. The downside of making AI software freely available is that there are people who will misuse the technology.”

However, he added that open source software could in turn provide a solution because it could be adapted.

A UK government spokesperson said AI-generated CSAM would be covered by the forthcoming online safety bill, and social media platforms would be required to prevent it from appearing on their services.

Speaking at a House of Lords communications and digital committee meeting on Tuesday, Hogarth said dealing with the issue of open source as opposed to closed source systems was a big challenge.

He said closed source systems had issues with a lack of transparency about their contents and their potential for damaging competition, while there were concerns about “irreversible proliferation” of open source models. Hogarth referred to concerns over CSAM generation and added that deployment of open source models could not be reversed.
