

Article

5 October 2023

Author:
Chris Stokel-Walker, Fast Company

Researchers find that generative AI tools have a 'US' bias

"AI image generators like DALL-E and Stable Diffusion have a representation problem" 5 October 2023

The United States accounts for less than 5% of the world’s population. English is spoken by just 17% of the globe. Yet type common words like “house” into an AI image generator like DALL-E or Stable Diffusion, and you’ll be presented with imagery of classic Americana.
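The article describes typing single words like “house” into these tools. As a rough illustration, here is a minimal sketch of issuing that kind of prompt to Stable Diffusion via the open-source Hugging Face diffusers library; the checkpoint and prompt are assumptions for illustration, not the study’s exact setup:

```python
# Minimal sketch: prompt Stable Diffusion with a bare common noun and
# save the result. Requires a CUDA GPU and the `diffusers` package.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint, not the paper's
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# The kind of underspecified, one-word prompt the article describes.
image = pipe("a house").images[0]
image.save("house.png")
```

Generating a batch of images this way and inspecting them is one informal way to observe the tendency the article describes: culturally unmarked prompts defaulting to US-style imagery.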

That’s a problem, as a new academic paper, presented this week at the IEEE/CVF International Conference on Computer Vision in Paris, France, shows. Danish Pruthi and colleagues at the Indian Institute of Science in Bangalore, India, analyzed the output of two of the world’s most popular image generators, asking people from 27 different countries to say how representative they thought the images produced were of their environment.

The participants were shown AI-generated images in response to queries asking the tools to produce depictions of houses, flags, weddings, and cities, among other subjects, and were then asked to rate them. Outside of the United States and India, most people felt that the AI tools produced imagery that didn’t match their lived experience. Participants’ lack of connection seems understandable: Ask DALL-E or Stable Diffusion to show a flag, and it’ll generate the Stars and Stripes—of little relevance to people in Slovenia or South Africa, two countries where people were surveyed on their responses...
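The paper’s exact analysis isn’t reproduced in the article, but the survey design implies a simple aggregation: average each country’s representativeness ratings and compare. A toy sketch with entirely hypothetical records:

```python
import pandas as pd

# Hypothetical survey rows (country, prompt, 1-5 representativeness
# rating); the real study's data and scale are not given in the article.
responses = pd.DataFrame(
    [
        ("United States", "flag", 5),
        ("United States", "house", 5),
        ("India", "wedding", 4),
        ("Slovenia", "flag", 1),
        ("South Africa", "flag", 2),
        ("Slovenia", "house", 2),
    ],
    columns=["country", "prompt", "rating"],
)

# Mean perceived representativeness per country, lowest (least
# represented) first.
print(responses.groupby("country")["rating"].mean().sort_values())
```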

...In part, the problem is one that blights all AI: The quality of a model’s outputs largely depends on the quality of its inputs. And the input data isn’t always very good. ImageNet, one of the main databases of source images used for AI image generators, has long been criticized for racist and sexist labels on images...

...Neither OpenAI, the company behind DALL-E, nor Stability AI, which produces Stable Diffusion, responded to a request for comment...
