Article

11 Jul 2023

Author:
Pat Brans, ComputerWeekly.com

Norwegian Consumer Council warns of generative AI threats, presents principles & recommendations to protect human rights

"Norwegian data privacy experts sound alarm over generative AI", 11 July 2023

Generative artificial intelligence (AI) recently burst onto the scene, producing text, images, sound and video that closely resemble human-made content. After being trained on publicly available data, ChatGPT, DALL-E, Bard and other AI models were unleashed to an eager public. Adoption of this technology is far outpacing legislative bodies' ability to pass the laws needed to ensure safety, reliability and fairness.

Norwegians are trying to get ahead of the game, raising questions about consumer protection and data privacy. The Norwegian Consumer Council published a report in June 2023 to address the harm generative AI might inflict on consumers. The report, Ghost in the machine – addressing the consumer harms of generative AI, presents overarching principles that would ensure generative AI systems are developed and used in a way that protects human rights.

The Norwegian data protection authority, Datatilsynet, is also raising awareness about the ways generative AI violates the General Data Protection Regulation (GDPR). Generative AI models train on large amounts of data taken from many different sources, usually without the knowledge or consent of the originator of the data.

Data privacy may be violated during the training phase

Most of the models used for generative AI are foundation models, meaning they are general-purpose enough to underpin a variety of applications. The people who train foundation models compile massive amounts of data from open sources on the internet, including a huge quantity of personal data.

The first concern raised by data protection authorities is whether organisations are entitled to collect all that personal data. Many data privacy experts think the data collection is unlawful. 

Once a model is trained, the training data is no longer needed. Many organisations assume that deleting the data at that point makes all data privacy issues go away, but that assumption has been challenged. A new class of attack, known as a model inversion attack, uses carefully crafted queries to an AI model to re-identify the data the model was trained on. Some of these attacks specifically target generative AI.
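To make the idea concrete, the following minimal Python sketch shows the shape of one such attack against a generative model, a training-data extraction probe: the attacker queries the model with prompts built from partially known personal context and checks whether the completions leak matching details. Here `query_model` is a hypothetical placeholder for any text-generation API, and the prefixes and patterns are purely illustrative, not taken from the report.

```python
# Minimal sketch of a training-data extraction probe (one form of model
# inversion). `query_model` is a hypothetical stand-in for a real
# text-generation API; plug in an actual model client to experiment.

def query_model(prompt: str) -> str:
    """Hypothetical wrapper around a generative model's completion API."""
    raise NotImplementedError("plug in a real model client here")

def probe_for_leakage(prefixes: list[str],
                      pii_patterns: list[str]) -> list[tuple[str, str]]:
    """Return (prompt, completion) pairs whose completion contains a
    known personal-data pattern, suggesting memorised training data."""
    leaks = []
    for prefix in prefixes:
        completion = query_model(prefix)
        for pattern in pii_patterns:
            if pattern in completion:
                leaks.append((prefix, completion))
                break
    return leaks

# Illustrative usage: an attacker who knows a name probes for contact details.
# prefixes = ["Jane Doe lives at", "You can reach Jane Doe at"]
# leaks = probe_for_leakage(prefixes, ["Oslo", "@example.com"])
```

The point of the sketch is that deleting the raw training data does not remove what the model has memorised: the personal data can, in some cases, be recovered through the model itself.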

Data rectification and erasure become very complicated

Another problem stems from the fact that models are trained on personal data: if a data protection authority orders an organisation to erase some of that data, the whole model may have to be erased, because the data has become an integral part of it.

Once training is complete, the companies behind the model can no longer change what it generates. From a data protection point of view, it is not clear whether the rights to erasure and rectification can be upheld. The only way to respect these rights may be to scrap the model and start afresh, which the organisations that own these models will be reluctant to do, given the resources they invested in training them in the first place.

An additional area of concern is that the queries users enter into these services – the questions they write – could be used for “service improvements”, meaning the input is fed back into further training. The AI models could also collect input for targeted advertising.

Generative AI models are designed to take existing material and present it in new ways. This means the models are inherently prone to reproducing existing biases and errors.

The Norwegian Consumer Council calls on EU institutions to resist lobbying pressure from big tech companies and to pass watertight consumer protection laws. Its report argues that laws alone are not enough: enforcement agencies also need more resources to make sure the laws are followed.