

Article

20 Jun 2023

Author:
Billy Perrigo, TIME Magazine

OpenAI allegedly lobbied EU to prevent its AI-powered tools from being considered "high-risk"

"Exclusive: OpenAI Lobbied the E.U. to Water Down AI Regulation", 20 June 2023

The CEO of OpenAI, Sam Altman, has spent the last month touring world capitals where, at talks to sold-out crowds and in meetings with heads of governments, he has repeatedly spoken of the need for global AI regulation.

But behind the scenes, OpenAI has lobbied for significant elements of the most comprehensive AI legislation in the world—the E.U.’s AI Act—to be watered down in ways that would reduce the regulatory burden on the company, according to documents about OpenAI’s engagement with E.U. officials obtained by TIME from the European Commission via freedom of information requests.

In several cases, OpenAI proposed amendments that were later made to the final text of the E.U. law—which was approved by the European Parliament on June 14, and will now proceed to a final round of negotiations before being finalized as soon as January.

In 2022, OpenAI repeatedly argued to European officials that the forthcoming AI Act should not consider its general purpose AI systems—including GPT-3, the precursor to ChatGPT, and the image generator Dall-E 2—to be “high risk,” a designation that would subject them to stringent legal requirements including transparency, traceability, and human oversight.

That argument brought OpenAI in line with Microsoft, which has invested $13 billion into the AI lab, and Google, both of which have previously lobbied E.U. officials in favor of loosening the Act’s regulatory burden on large AI providers...

...In a statement to TIME, an OpenAI spokesperson said: “At the request of policymakers in the E.U., in September 2022 we provided an overview of our approach to deploying systems like GPT-3 safely, and commented on the then-draft of the [AI Act] based on that experience. Since then, the [AI Act] has evolved substantially and we’ve spoken publicly about the technology’s advancing capabilities and adoption. We continue to engage with policymakers and support the E.U.’s goal of ensuring AI tools are built, deployed and used safely now and in the future.”...
