

Article

5 October 2023

Author:
Chris Stokel-Walker, Fast Company

Researchers find that generative AI tools allegedly have a 'US' bias

"AI image generators like DALL-E and Stable Diffusion have a representation problem" 5 October 2023

The United States accounts for less than 5% of the world’s population. English is spoken by just 17% of the globe. Yet type common words like “house” into an AI image generator like DALL-E or Stable Diffusion, and you’ll be presented with imagery of classic Americana.

That’s a problem, as a new academic paper, presented this week at the IEEE/CVF International Conference on Computer Vision in Paris, France, shows. Danish Pruthi and colleagues at the Indian Institute of Science in Bangalore, India, analyzed the output of two of the world’s most popular image generators, asking people from 27 different countries to say how representative they thought the images produced were of their environment.

The participants were shown AI-generated images in response to queries asking the tools to produce depictions of houses, flags, weddings, and cities, among others, then asked to rate them. Outside of the United States and India, most people felt that the AI tools outputted imagery that didn’t match their lived experience. Participants’ lack of connection seems understandable: Ask DALL-E or Stable Diffusion to show a flag, and it’ll generate the Stars and Stripes—of little relevance to people in Slovenia or South Africa, two countries where people were surveyed on their responses...

...In part, the problem is one that blights all AI: When it comes to the quality of model outputs, it largely depends on model inputs. And the input data isn’t always very good. ImageNet, one of the main databases of source images used for AI image generators, has long been criticized for racist and sexist labels on images...

...Neither OpenAI, the company behind DALL-E, nor Stability AI, which produces Stable Diffusion, responded to a request for comment...
