

Article

5 October 2023

Author:
Chris Stokel-Walker, Fast Company

Researchers find that generative AI tools allegedly have a 'US' bias

"AI image generators like DALL-E and Stable Diffusion have a representation problem" 5 October 2023

The United States accounts for less than 5% of the world’s population. English is spoken by just 17% of the globe. Yet type common words like “house” into an AI image generator like DALL-E or Stable Diffusion, and you’ll be presented with imagery of classic Americana.

That’s a problem, as a new academic paper, presented this week at the IEEE/CVF International Conference on Computer Vision in Paris, France, shows. Danish Pruthi and colleagues at the Indian Institute of Science in Bangalore, India, analyzed the output of two of the world’s most popular image generators, asking people from 27 different countries to say how representative they thought the images produced were of their environment.

The participants were shown AI-generated images produced in response to queries asking the tools to depict houses, flags, weddings, and cities, among other prompts, then asked to rate them. Outside of the United States and India, most people felt that the AI tools produced imagery that didn’t match their lived experience. Participants’ lack of connection seems understandable: Ask DALL-E or Stable Diffusion to show a flag, and it’ll generate the Stars and Stripes—of little relevance to people in Slovenia or South Africa, two countries where people were surveyed on their responses...
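To make the kind of probe described above concrete, here is a minimal sketch: feed an image generator deliberately underspecified prompts such as "a house" or "a flag" and save the outputs for human review. It assumes the Hugging Face diffusers library and a public Stable Diffusion checkpoint; the model ID and prompt wording are illustrative, not taken from the paper, and this is not the authors' exact pipeline.

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed checkpoint for illustration; the paper does not specify this one.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Deliberately underspecified prompts: no country, culture, or style is named.
prompts = [
    "a photo of a house",
    "a photo of a flag",
    "a photo of a wedding",
    "a photo of a city",
]

for prompt in prompts:
    # Whatever the model draws here reflects the defaults baked into its
    # training data, which is what survey participants were asked to judge.
    image = pipe(prompt).images[0]
    image.save(prompt.replace(" ", "_") + ".png")
```

Raters in each country could then be shown these images and asked how well they match their own surroundings, which is the shape of the comparison the study reports.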

...In part, the problem is one that blights all AI: the quality of a model’s outputs largely depends on the quality of its inputs. And the input data isn’t always very good. ImageNet, one of the main databases of source images used for AI image generators, has long been criticized for racist and sexist labels on images...

...Neither OpenAI, the company behind DALL-E, nor Stability AI, which produces Stable Diffusion, responded to a request for comment...
