
Article

13 March 2023

Authors:
Zoe Schiffer & Casey Newton, The Verge;
Olhar Digital

Microsoft eliminates team dedicated to integrating AI principles into its product design

"Microsoft lays off team that taught employees how to make AI tools responsibly", 13 March 2023


Microsoft laid off its entire ethics and society team within the artificial intelligence organization as part of recent layoffs that affected 10,000 employees across the company, Platformer has learned. 

The move leaves Microsoft without a dedicated team to ensure its AI principles are closely tied to product design at a time when the company is leading the charge to make AI tools available to the mainstream, current and former employees said.

Microsoft still maintains an active Office of Responsible AI, which is tasked with creating rules and principles to govern the company’s AI initiatives. The company says its overall investment in responsibility work is increasing despite the recent layoffs.

But employees said the ethics and society team played a critical role in ensuring that the company’s responsible AI principles are actually reflected in the design of the products that ship.

More recently, the team has been working to identify risks posed by Microsoft’s adoption of OpenAI’s technology throughout its suite of products.

The ethics and society team was at its largest in 2020, when it had roughly 30 employees, including engineers, designers, and philosophers. In October, the team was cut to roughly seven people as part of a reorganization.

In a meeting with the team following the reorg, John Montgomery, corporate vice president of AI, told employees that company leaders had instructed them to move swiftly.

In response to questions, though, Montgomery said the team would not be eliminated.

Most members of the team were transferred elsewhere within Microsoft. Afterward, remaining ethics and society team members said that the smaller crew made it difficult to implement their ambitious plans.

The move leaves a foundational gap on the holistic design of AI products, one employee says

About five months later, on March 6th, remaining employees were told to join a Zoom call at 11:30AM PT to hear a “business critical update” from Montgomery. During the meeting, they were told that their team was being eliminated after all. 

One employee says the move leaves a foundational gap on the user experience and holistic design of AI products. “The worst thing is we’ve exposed the business to risk and human beings to risk in doing this,” they explained.

The conflict underscores an ongoing tension for tech giants that build divisions dedicated to making their products more socially responsible. At their best, they help product teams anticipate potential misuses of technology and fix any problems before they ship.

But they also have the job of saying “no” or “slow down” inside organizations that often don’t want to hear it — or spelling out risks that could lead to legal headaches for the company if surfaced in legal discovery. And the resulting friction sometimes boils over into public view.

Microsoft became focused on shipping AI tools more quickly than its rivals

Members of the ethics and society team said they generally tried to be supportive of product development. But they said that as Microsoft became focused on shipping AI tools more quickly than its rivals, the company’s leadership became less interested in the kind of long-term thinking that the team specialized in.

The elimination of the ethics and society team came just as the group’s remaining employees had trained their focus on arguably their biggest challenge yet: anticipating what would happen when Microsoft released tools powered by OpenAI to a global audience.

Last year, the team wrote a memo detailing brand risks associated with the Bing Image Creator, which uses OpenAI’s DALL-E system to create images based on text prompts. The image tool launched in a handful of countries in October, making it one of Microsoft’s first public collaborations with OpenAI.

While text-to-image technology has proved hugely popular, Microsoft researchers correctly predicted that it could also threaten artists' livelihoods by allowing anyone to easily copy their style.
