ChatGPT faces legal complaint after a user inputted their own name and found it accused them of made-up crimes

OpenAI logo displayed on a phone screen and ChatGPT website displayed on a laptop screen are seen in this illustration photo taken in Krakow, Poland on December 5, 2022.
(Image credit: Jakub Porzycki/NurPhoto via Getty Images)

AI ‘hallucinations’ are a well-documented phenomenon. Because Large Language Models only make their best guess about which word is most likely to come next, rather than actually understanding what they're saying, they're prone to simply making stuff up. Between fake cheese facts and stomach-turning medical advice, misinformation like this can be funny, but it's far from harmless. Now, there may actually be legal recourse.

A Norwegian man called Arve Hjalmar Holmen recently struck up a conversation with ChatGPT to see what information OpenAI’s chatbot would offer when he typed in his own name. He was horrified when ChatGPT allegedly spun a yarn falsely claiming he’d killed his own sons and been sentenced to 21 years in prison (via TechCrunch). The creepiest aspect? Around the story of the made-up crime, ChatGPT included accurate, identifiable details about Holmen’s personal life, such as the number and gender of his children and the name of his home town.

The privacy rights advocacy group Noyb soon got involved. The organisation told TechCrunch it carried out its own investigation into why ChatGPT could be outputting these claims, checking whether someone with a similar name had committed serious crimes. Ultimately, it found nothing substantial along these lines, so the 'why' behind ChatGPT's hair-raising output remains unclear.

The chatbot's underlying AI model has since been updated, and it no longer repeats the defamatory claims. However, Noyb, having previously filed complaints over ChatGPT outputting inaccurate information about public figures, wasn't willing to close the book there. The organisation has now filed a complaint with Datatilsynet (the Norwegian Data Protection Authority) on the grounds that ChatGPT violated GDPR.

Under Article 5(1)(d) of the GDPR, companies processing personal data have to ensure that it’s accurate, and if it’s not, it must either be corrected or deleted. Noyb makes the case that, just because ChatGPT has stopped falsely accusing Holmen of being a murderer, that doesn’t mean the inaccurate data has been deleted.

The ChatGPT interface, in which a user has typed the chatbot's own disclaimer, "ChatGPT can make mistakes. Consider checking important information," also seen at the bottom of the session window. ChatGPT replies: "You're absolutely right! While I strive to provide accurate information, I can still make mistakes. It's always a good idea to double-check critical details, especially when it comes to important decisions or complex topics."

(Image credit: Future)

Noyb wrote, “The incorrect data may still remain part of the LLM’s dataset. By default, ChatGPT feeds user data back into the system for training purposes. This means there is no way for the individual to be absolutely sure that this output can be completely erased [...] unless the entire AI model is retrained.”

Noyb also alleges that, by its nature, ChatGPT doesn't comply with Article 15 of the GDPR. Simply put, there's no guarantee that you can retrieve whatever you feed into ChatGPT, or see what data about you has ended up in its dataset. On this point, Noyb writes, “This fact understandably still causes distress and fear for the complainant, [Holmen].”

At present, Noyb is requesting that Datatilsynet order OpenAI to delete the inaccurate data about Holmen and to ensure that ChatGPT can't hallucinate another horror story about someone else. Given that OpenAI's current approach is merely to display the disclaimer "ChatGPT can make mistakes. Consider checking important information" in tiny font at the bottom of each user session, this is perhaps a tall order.

Still, I’m glad to see Noyb apply legal pressure to OpenAI, especially as the US government has seemingly thrown caution to the wind and gone all in on AI with the ‘Stargate’ infrastructure plan. When ChatGPT can easily output defamatory claims right alongside accurate, identifying information, a crumb of caution feels like less than the bare minimum.

Jess Kinghorn
Hardware Writer

Jess has been writing about games for over ten years, spending the last seven working on print publications PLAY and Official PlayStation Magazine. When she’s not writing about all things hardware here, she’s getting cosy with a horror classic, ranting about a cult hit to a captive audience, or tinkering with some tabletop nonsense.
