abusesaffiliationarrow-downarrow-leftarrow-rightarrow-upattack-typeburgerchevron-downchevron-leftchevron-rightchevron-upClock iconclosedeletedevelopment-povertydiscriminationdollardownloademailenvironmentexternal-linkfacebookfiltergenderglobegroupshealthC4067174-3DD9-4B9E-AD64-284FDAAE6338@1xinformation-outlineinformationinstagraminvestment-trade-globalisationissueslabourlanguagesShapeCombined Shapeline, chart, up, arrow, graphLinkedInlocationmap-pinminusnewsorganisationotheroverviewpluspreviewArtboard 185profilerefreshIconnewssearchsecurityPathStock downStock steadyStock uptagticktooltiptwitteruniversalityweb

這頁面沒有繁體中文版本,現以English顯示

文章

25 十一月 2022

作者:
Hans de Zwart, Racism & Technology Center

Text-to-image generation machine learning models amplify demographic stereotypes

Figure 1 of the paper: Simple user prompts generate thousands of images perpetuating dangerous stereo- types. For each descriptor, the prompt “A photo of the face of _____” is fed to Stable Diffusion, and we present a random sample of the images generated

"Racist Technology in Action: AI-generated image tools amplify harmful stereotypes", 25 November 2022

Deep learning models that allow you to make images from simple textual ‘prompts’ have recently become available for the general public. Having been trained on a world full of visual representations of social stereotypes, it comes as no surprise that these tools perpetuate a lot of biased and harmful imagery.

A group of researchers have written a paper titled Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale. They have given Stable Diffusion – one of the more popular AI-based image generation tools – a set of prompts starting with “A photo of the face of …….”, and then used terms like ‘an attractive person’, ‘a terrorist’, or ‘a poor person’.

They found that the model generated a tremendous amount of images “perpetuating dangerous racial, ethnic, gendered, class, and intersectional stereotypes.” Not only does the model reflect existing stereotypes, it also amplifies them. The authors write: “For example, in the country where the foundational training dataset was constructed (United States), 56% of software developers identified as white, but 99% of the generated software developer images are represented as white.”

Unfortunately, the researchers believe that it will be very hard to mitigate these negative outcomes. Some of the models have ‘guardrails’, where for example they’ve been explicitly programmed to not show people of colour in relation to negative words. But it clearly is impossible for the people who create the models to anticipate all possible forms of stereotypical output. For example ‘an American man and his car’ will show a more expensive car than ‘an African man and his car’. Even if a user tries to avoid this type of problems through careful prompts (e.g. ‘an African man and his mansion’) the results are abysmal. The paper therefore concludes:

We urge users to exercise caution and refrain from using such image generation models in any applications that have downstream effects on the real-world, and we call for users, model-owners, and society at large to take a critical view of the consequences of these models. The examples and patterns we demonstrate make it clear that these models, while appearing to be unprecedentedly powerful and versatile in creating images of things that do not exist, are in reality brittle and extremely limited in the worlds they will create.

Sasha Luccioni has created a tool that allows you to explore the bias in Stable Diffusion for yourself: Diffusion Bias Explorer. So do check out the different representations of a ‘committed janitor’ and an ‘assertive firefighter’.

隱私資訊

本網站使用 cookie 和其他網絡存儲技術。您可以在下方設置您的隱私選項。您所作的更改將立即生效。

有關我們使用網絡儲存技術的更多資訊,請參閱我們的 數據使用和 Cookie 政策

Strictly necessary storage

ON
OFF

Necessary storage enables core site functionality. This site cannot function without it, so it can only be disabled by changing settings in your browser.

分析cookie

ON
OFF

您瀏覽本網頁時我們將以Google Analytics收集信息。接受此cookie將有助我們理解您的瀏覽資訊,並協助我們改善呈現資訊的方法。所有分析資訊都以匿名方式收集,我們並不能用相關資訊得到您的個人信息。谷歌在所有主要瀏覽器中都提供退出Google Analytics的添加應用程式。

市場營銷cookies

ON
OFF

我們從第三方網站獲得企業責任資訊,當中包括社交媒體和搜尋引擎。這些cookie協助我們理解相關瀏覽數據。

您在此網站上的隱私選項

本網站使用 cookie 和其他網絡儲存技術來增強您在必要核心功能之外的體驗。