Norway: OpenAI faces GDPR complaint over ChatGPT’s defamatory AI hallucinations
"ChatGPT hit with privacy complaint over defamatory hallucinations", 19 March 2025
OpenAI is facing another privacy complaint in Europe over its viral AI chatbot’s tendency to hallucinate false information — and this one might prove tricky for regulators to ignore.
Privacy rights advocacy group Noyb is supporting an individual in Norway who was horrified to find ChatGPT returning made-up information claiming he had been convicted of murdering two of his children and of attempting to kill the third.
Earlier privacy complaints about ChatGPT generating incorrect personal data have involved issues such as a wrong birth date or faulty biographical details. One concern is that OpenAI does not offer a way for individuals to correct false information the AI generates about them; typically, OpenAI has offered to block responses to such prompts instead. But under the European Union's General Data Protection Regulation (GDPR), Europeans have a suite of data access rights that include a right to rectification of personal data.
Another component of this data protection law requires data controllers to make sure that the personal data they produce about individuals is accurate — and that’s a concern Noyb is flagging with its latest ChatGPT complaint.
“The GDPR is clear. Personal data has to be accurate,” said Joakim Söderberg, data protection lawyer at Noyb, in a statement. “If it’s not, users have the right to have it changed to reflect the truth. Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn’t enough. You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.”
Confirmed breaches of the GDPR can lead to penalties of up to 4% of global annual turnover.
...
Noyb’s new ChatGPT complaint looks intended to shake privacy regulators awake when it comes to the dangers of hallucinating AIs.
“The case shocked the local community … “
The nonprofit shared the (below) screenshot with TechCrunch, which shows an interaction with ChatGPT in which the AI responds to the question "Who is Arve Hjalmar Holmen?" (the name of the individual bringing the complaint) by producing a tragic fiction that falsely states he was convicted of child murder and sentenced to 21 years in prison for slaying two of his own sons.
While the defamatory claim that Hjalmar Holmen is a child murderer is entirely false, Noyb notes that ChatGPT's response does include some truths, since the individual in question does have three children. The chatbot also got the genders of his children right. And his home town is correctly named. But that just makes it all the more bizarre and unsettling that the AI hallucinated such gruesome falsehoods on top.
A spokesperson for Noyb said they were unable to determine why the chatbot produced such a specific yet false history for this individual.
...
Noyb's contention is also that such hallucinations are unlawful under EU data protection rules. And while OpenAI does display a tiny disclaimer at the bottom of the screen that says "ChatGPT can make mistakes. Check important info," Noyb says this cannot absolve the AI developer of its duty under the GDPR not to produce egregious falsehoods about people in the first place.
OpenAI was contacted for a response to the complaint. Its PR firm in Europe, Headland Consultancy, emailed us the following statement, attributed to an OpenAI spokesperson: “We continue to research new ways to improve the accuracy of our models and reduce hallucinations. While we’re still reviewing this complaint, it relates to a version of ChatGPT which has since been enhanced with online search capabilities that improves accuracy.”
While this GDPR complaint pertains to one named individual, Noyb points to other instances of ChatGPT fabricating legally compromising information, such as an Australian mayor who said he was implicated in a bribery and corruption scandal, or a German journalist who was falsely named as a child abuser, saying it's clear that this isn't an isolated issue for the AI tool.
One important thing to note is that, following an update to the underlying AI model powering ChatGPT, Noyb says the chatbot stopped producing the dangerous falsehoods about Hjalmar Holmen. Noyb links the change to the tool now searching the internet for information about people when asked who they are; previously, a blank in its dataset could, presumably, have encouraged it to hallucinate such a wildly wrong response.
...
While ChatGPT appears to have stopped generating dangerous falsehoods about Hjalmar Holmen, both Noyb and Hjalmar Holmen remain concerned that incorrect and defamatory information about him could have been retained within the AI model.
...
Noyb has filed the complaint against OpenAI with the Norwegian data protection authority, and it's hoping the watchdog will decide it is competent to investigate, since Noyb is targeting the complaint at OpenAI's U.S. entity, arguing that OpenAI's Ireland office is not solely responsible for product decisions impacting Europeans.
...