
Article

14 March 2025

Author:
The Verge

USA: Google and OpenAI seek end to copyright restrictions for AI training

“OpenAI and Google ask the government to let them train AI on content they don’t own”, 14 March 2025

OpenAI and Google are pushing the US government to allow their AI models to train on copyrighted material. Both companies outlined their stances in proposals published this week, with OpenAI arguing that applying fair use protections to AI “is a matter of national security.”

The proposals come in response to a request from the White House, which asked governments, industry groups, private sector organizations, and others for input on President Donald Trump’s “AI Action Plan.”...

In its comment, OpenAI claims that allowing AI companies to access copyrighted content would help the US “avoid forfeiting” its lead in AI to China, while calling out the rise of DeepSeek.

“There’s little doubt that the PRC’s [People’s Republic of China] AI developers will enjoy unfettered access to data — including copyrighted data — that will improve their models,” OpenAI writes. “If the PRC’s developers have unfettered access to data and American companies are left without fair use access, the race for AI is effectively over.”

Google, unsurprisingly, agrees. The company’s response similarly states that copyright, privacy, and patent policies “can impede appropriate access to data necessary for training leading models.” It adds that fair use policies, along with text and data mining exceptions, have been “critical” to training AI on publicly available data....

Anthropic, the company behind the AI chatbot Claude, also submitted a proposal – but it doesn’t mention anything about copyright. Instead, it asks the US government to develop a system to assess an AI model’s national security risks and to strengthen export controls on AI chips. Like Google and OpenAI, Anthropic also suggests that the US bolster its energy infrastructure to support the growth of AI…
