

Article

13 March 2023

Authors:
Zoe Schiffer & Casey Newton, The Verge;
Olhar Digital

Microsoft eliminates team dedicated to integrating AI principles into its product design

"Microsoft lays off team that taught employees how to make AI tools responsibly", 13 March 2023


Microsoft laid off its entire ethics and society team within the artificial intelligence organization as part of recent layoffs that affected 10,000 employees across the company, Platformer has learned. 

The move leaves Microsoft without a dedicated team to ensure its AI principles are closely tied to product design at a time when the company is leading the charge to make AI tools available to the mainstream, current and former employees said.

Microsoft still maintains an active Office of Responsible AI, which is tasked with creating rules and principles to govern the company’s AI initiatives. The company says its overall investment in responsibility work is increasing despite the recent layoffs.

But employees said the ethics and society team played a critical role in ensuring that the company’s responsible AI principles are actually reflected in the design of the products that ship.

More recently, the team has been working to identify risks posed by Microsoft’s adoption of OpenAI’s technology throughout its suite of products.

The ethics and society team was at its largest in 2020, when it had roughly 30 employees including engineers, designers, and philosophers. In October, the team was cut to roughly seven people as part of a reorganization. 

In a meeting with the team following the reorganization, John Montgomery, corporate vice president of AI, told employees that company leaders had instructed them to move swiftly.

In response to questions, though, Montgomery said the team would not be eliminated.

Most members of the team were transferred elsewhere within Microsoft. Afterward, remaining ethics and society team members said that the smaller crew made it difficult to implement their ambitious plans.

The move leaves a foundational gap on the holistic design of AI products, one employee says

About five months later, on March 6th, remaining employees were told to join a Zoom call at 11:30AM PT to hear a “business critical update” from Montgomery. During the meeting, they were told that their team was being eliminated after all. 

One employee says the move leaves a foundational gap on the user experience and holistic design of AI products. “The worst thing is we’ve exposed the business to risk and human beings to risk in doing this,” they explained.

The conflict underscores an ongoing tension for tech giants that build divisions dedicated to making their products more socially responsible. At their best, they help product teams anticipate potential misuses of technology and fix any problems before they ship.

But they also have the job of saying “no” or “slow down” inside organizations that often don’t want to hear it — or spelling out risks that could lead to legal headaches for the company if surfaced in legal discovery. And the resulting friction sometimes boils over into public view.

Microsoft became focused on shipping AI tools more quickly than its rivals

Members of the ethics and society team said they generally tried to be supportive of product development. But they said that as Microsoft became focused on shipping AI tools more quickly than its rivals, the company’s leadership became less interested in the kind of long-term thinking that the team specialized in.

The elimination of the ethics and society team came just as the group’s remaining employees had trained their focus on arguably their biggest challenge yet: anticipating what would happen when Microsoft released tools powered by OpenAI to a global audience.

Last year, the team wrote a memo detailing brand risks associated with the Bing Image Creator, which uses OpenAI’s DALL-E system to create images based on text prompts. The image tool launched in a handful of countries in October, making it one of Microsoft’s first public collaborations with OpenAI.

While text-to-image technology has proved hugely popular, Microsoft researchers correctly predicted that it could also threaten artists’ livelihoods by allowing anyone to easily copy their style.
