

Article

5 Oct 2023

Author:
Chris Stokel-Walker, Fast Company

Researchers find that generative AI tools allegedly have a 'US' bias

"AI image generators like DALL-E and Stable Diffusion have a representation problem" 5 October 2023

The United States accounts for less than 5% of the world’s population. English is spoken by just 17% of the globe. Yet type common words like “house” into an AI image generator like DALL-E or Stable Diffusion, and you’ll be presented with imagery of classic Americana.

That’s a problem, as a new academic paper, presented this week at the IEEE/CVF International Conference on Computer Vision in Paris, France, shows. Danish Pruthi and colleagues at the Indian Institute of Science in Bangalore, India, analyzed the output of two of the world’s most popular image generators, asking people from 27 countries to say how representative they thought the generated images were of their environment.

The participants were shown AI-generated images in response to queries asking the tools to produce depictions of houses, flags, weddings, and cities, among other subjects, then asked to rate them. Outside of the United States and India, most people felt that the AI tools produced imagery that didn’t match their lived experience. Participants’ lack of connection seems understandable: Ask DALL-E or Stable Diffusion to show a flag, and it’ll generate the Stars and Stripes—of little relevance to people in Slovenia or South Africa, two of the countries surveyed...

...In part, the problem is one that blights all AI: When it comes to the quality of model outputs, it largely depends on model inputs. And the input data isn’t always very good. ImageNet, one of the main databases of source images used for AI image generators, has long been criticized for racist and sexist labels on images...

...Neither OpenAI, the company behind DALL-E, nor Stability AI, which produces Stable Diffusion, responded to a request for comment...
