
Article

7 September 2023

Author:
Josh Butler, The Guardian

Australia: New code will require AI-made child abuse & terrorist material be removed from search results

"Search engines required to stamp out AI-generated images of child abuse under Australia’s new code", 7 September 2023

Artificial intelligence tools could be used to generate child abuse images and terrorist propaganda, Australia’s eSafety Commissioner has warned while announcing a world-leading industry standard that requires tech giants to stamp out such material on AI-powered search engines.

The new industry code covering search engines, to be detailed..., requires providers such as Google, Microsoft's Bing and DuckDuckGo to eliminate child abuse material from their search results, and to take steps to ensure generative AI products cannot be used to create deepfake versions of that material.

Julie Inman Grant, the eSafety Commissioner, said the companies themselves needed to be at the forefront of reducing the harms their products can create. “We are seeing ‘synthetic’ child abuse material come through,” she said. “Terror organisations are using generative AI to create propaganda. It’s already happening. It’s not a fanciful thing. We felt it needed to be covered.”

Microsoft and Google recently announced plans to integrate AI tools, OpenAI's ChatGPT and Google's own Bard respectively, with their popular consumer search engines. Inman Grant said the progress of AI technology required a rethink of the "search code" covering those platforms.

The eSafety Commissioner said the previous version of the code only covered online material that search engines returned after queries, not material that these services could generate. The new code will require search engines to regularly review and improve their AI tools to ensure "class 1A" material – including child sexual exploitation, pro-terror and extreme violence material – is not returned in search results, including by delisting and blocking such content.

The companies will also be required to research technologies that would help users detect and identify deepfake images accessible from their services. The eSafety Commissioner believes it is one of the first frameworks of its kind in the world.

Inman Grant said the new rules would compel tech companies not only to reduce harms on their platforms, but also to build tools that promote greater safety, such as deepfake image detection.
