

Article

7 September 2023

Author:
Josh Butler, The Guardian

Australia: New code will require AI-made child abuse & terrorist material be removed from search results

"Search engines required to stamp out AI-generated images of child abuse under Australia’s new code", 7 September 2023

Artificial intelligence tools could be used to generate child abuse images and terrorist propaganda, Australia’s eSafety Commissioner has warned while announcing a world-leading industry standard that requires tech giants to stamp out such material on AI-powered search engines.

The new industry code covering search engines, to be detailed..., requires big tech firms like Google, Microsoft’s Bing and DuckDuckGo to eliminate child abuse material from their search results, and to take steps to ensure generative AI products can’t be used to generate deepfake versions of that material.

Julie Inman Grant, the eSafety Commissioner, said the companies themselves needed to be at the forefront of reducing the harms their products can create. “We are seeing ‘synthetic’ child abuse material come through,” she said. “Terror organisations are using generative AI to create propaganda. It’s already happening. It’s not a fanciful thing. We felt it needed to be covered.”

Microsoft and Google recently announced plans to integrate their AI tools ChatGPT and Bard respectively with their popular consumer search engines. Inman Grant said the progress of AI technology required a rethink of the “search code” covering those platforms.

The eSafety Commissioner said the previous version of the code only covered online material that search engines returned after queries, not material that these services could generate. The new code will require search engines to regularly review and improve their AI tools to ensure “class 1A” material – including child sexual exploitation, pro-terror and extreme violence material – is not returned in search results, including by delisting and blocking such search results.

The companies will also be required to research technologies which would help users detect and identify deepfake images accessible from their services. The eSafety Commission believes it is one of the first frameworks of its kind in the world.

Inman Grant said the new rules would compel tech companies not only to reduce harms on their platforms, but also to work on building tools that promote greater safety, such as tools to detect deepfake images.
