abusesaffiliationarrow-downarrow-leftarrow-rightarrow-upattack-typeburgerchevron-downchevron-leftchevron-rightchevron-upClock iconclosedeletedevelopment-povertydiscriminationdollardownloademailenvironmentexternal-linkfacebookfiltergenderglobegroupshealthC4067174-3DD9-4B9E-AD64-284FDAAE6338@1xinformation-outlineinformationinstagraminvestment-trade-globalisationissueslabourlanguagesShapeCombined Shapeline, chart, up, arrow, graphLinkedInlocationmap-pinminusnewsorganisationotheroverviewpluspreviewArtboard 185profilerefreshIconnewssearchsecurityPathStock downStock steadyStock uptagticktooltiptwitteruniversalityweb

이 페이지는 한국어로 제공되지 않으며 English로 표시됩니다.

기사

25 11월 2022

저자:
Hans de Zwart, Racism & Technology Center

Text-to-image generation machine learning models amplify demographic stereotypes

Figure 1 of the paper: Simple user prompts generate thousands of images perpetuating dangerous stereo- types. For each descriptor, the prompt “A photo of the face of _____” is fed to Stable Diffusion, and we present a random sample of the images generated

"Racist Technology in Action: AI-generated image tools amplify harmful stereotypes", 25 November 2022

Deep learning models that allow you to make images from simple textual ‘prompts’ have recently become available for the general public. Having been trained on a world full of visual representations of social stereotypes, it comes as no surprise that these tools perpetuate a lot of biased and harmful imagery.

A group of researchers have written a paper titled Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale. They have given Stable Diffusion – one of the more popular AI-based image generation tools – a set of prompts starting with “A photo of the face of …….”, and then used terms like ‘an attractive person’, ‘a terrorist’, or ‘a poor person’.

They found that the model generated a tremendous amount of images “perpetuating dangerous racial, ethnic, gendered, class, and intersectional stereotypes.” Not only does the model reflect existing stereotypes, it also amplifies them. The authors write: “For example, in the country where the foundational training dataset was constructed (United States), 56% of software developers identified as white, but 99% of the generated software developer images are represented as white.”

Unfortunately, the researchers believe that it will be very hard to mitigate these negative outcomes. Some of the models have ‘guardrails’, where for example they’ve been explicitly programmed to not show people of colour in relation to negative words. But it clearly is impossible for the people who create the models to anticipate all possible forms of stereotypical output. For example ‘an American man and his car’ will show a more expensive car than ‘an African man and his car’. Even if a user tries to avoid this type of problems through careful prompts (e.g. ‘an African man and his mansion’) the results are abysmal. The paper therefore concludes:

We urge users to exercise caution and refrain from using such image generation models in any applications that have downstream effects on the real-world, and we call for users, model-owners, and society at large to take a critical view of the consequences of these models. The examples and patterns we demonstrate make it clear that these models, while appearing to be unprecedentedly powerful and versatile in creating images of things that do not exist, are in reality brittle and extremely limited in the worlds they will create.

Sasha Luccioni has created a tool that allows you to explore the bias in Stable Diffusion for yourself: Diffusion Bias Explorer. So do check out the different representations of a ‘committed janitor’ and an ‘assertive firefighter’.

개인정보

이 웹사이트는 쿠키 및 기타 웹 저장 기술을 사용합니다. 아래에서 개인정보보호 옵션을 설정할 수 있습니다. 변경 사항은 즉시 적용됩니다.

웹 저장소 사용에 대한 자세한 내용은 다음을 참조하세요 데이터 사용 및 쿠키 정책

Strictly necessary storage

ON
OFF

Necessary storage enables core site functionality. This site cannot function without it, so it can only be disabled by changing settings in your browser.

분석 쿠키

ON
OFF

귀하가 우리 웹사이트를 방문하면 Google Analytics를 사용하여 귀하의 방문 정보를 수집합니다. 이 쿠키를 수락하면 저희가 귀하의 방문에 대한 자세한 내용을 이해하고, 정보 표시 방법을 개선할 수 있습니다. 모든 분석 정보는 익명이 보장되며 귀하를 식별하는데 사용하지 않습니다. Google은 모든 브라우저에 대해 Google Analytics 선택 해제 추가 기능을 제공합니다.

프로모션 쿠키

ON
OFF

우리는 소셜미디어와 검색 엔진을 포함한 제3자 플랫폼을 통해 기업과 인권에 대한 뉴스와 업데이트를 제공합니다. 이 쿠키는 이러한 프로모션의 성과를 이해하는데 도움이 됩니다.

이 사이트에 대한 개인정보 공개 범위 선택

이 사이트는 필요한 핵심 기능 이상으로 귀하의 경험을 향상시키기 위해 쿠키 및 기타 웹 저장 기술을 사용합니다.