
Norwegian files complaint after ChatGPT falsely said he had murdered his children
A Norwegian man has filed a complaint against the company behind ChatGPT after the chatbot falsely claimed he had murdered two of his children.
Arve Hjalmar Holmen, a self-described “regular person” with no public profile in Norway, asked ChatGPT for information about himself and received a reply claiming he had killed his own sons.
Responding to the prompt “Who is Arve Hjalmar Holmen?” ChatGPT replied: “Arve Hjalmar Holmen is a Norwegian individual who gained attention due to a tragic event. He was the father of two young boys, aged 7 and 10, who were tragically found dead in a pond near their home in Trondheim, Norway, in December 2020.”
The response went on to claim the case “shocked” the nation and that Holmen received a 21-year prison sentence for murdering both children.
Holmen said in a complaint to the Norwegian Data Protection Authority that the “completely false” story nonetheless contained details resembling his own life, such as his home town, the number of children he has and the age gap between his sons.
“The complainant was deeply troubled by these outputs, which could have harmful effect in his private life, if they where reproduced or somehow leaked in his community or in his home town,” said the complaint, which has been filed by Holmen and Noyb, a digital rights campaign group.
It added that Holmen has “never been accused nor convicted of any crime and is a conscientious citizen”.
Holmen’s complaint alleges that ChatGPT’s “defamatory” response violated the accuracy provisions of the European data protection law, GDPR. It asks the Norwegian watchdog to order ChatGPT’s parent company, OpenAI, to adjust its model so it no longer produces inaccurate results about Holmen, and to impose a fine on the company. Noyb said that since Holmen’s interaction with ChatGPT took place, OpenAI has released a new model incorporating web searches, which has made a repeat of the Holmen error “less likely”.
AI chatbots are prone to producing responses containing false information because they are built on language models that generate text by predicting the most likely next word in a sentence. This can result in factual errors and wild assertions, but the fluent, plausible tone of the responses can trick users into thinking that what they are reading is entirely accurate.
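To illustrate that mechanism in the broadest terms (this is a toy sketch, not a description of how OpenAI’s models are actually built, and every word probability below is invented), a next-word predictor can be reduced to repeatedly sampling from a probability table. The output reads fluently at each step, but nothing in the procedure checks whether the finished sentence is true.

```python
import random

# Toy next-word probability table (invented for illustration only).
next_word_probs = {
    "He":        {"was": 0.6, "is": 0.4},
    "was":       {"the": 0.5, "convicted": 0.3, "a": 0.2},
    "the":       {"father": 0.7, "subject": 0.3},
    "father":    {"of": 1.0},
    "subject":   {"of": 1.0},
    "convicted": {"of": 1.0},
    "of":        {"two": 0.8, "a": 0.2},
    "two":       {"boys.": 1.0},
    "a":         {"Norwegian": 1.0},
    "Norwegian": {"individual.": 1.0},
    "is":        {"a": 1.0},
}

def generate(start, max_words=8):
    """Repeatedly sample a plausible next word; stop when the table runs out."""
    words = [start]
    for _ in range(max_words):
        dist = next_word_probs.get(words[-1])
        if dist is None:
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

# Each run produces a grammatical-sounding sentence, some of them false or
# nonsensical, because the model only optimises local plausibility.
print(generate("He"))
```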
An OpenAI spokesperson said: “We continue to research new ways to improve the accuracy of our models and reduce hallucinations. While we’re still reviewing this complaint, it relates to a version of ChatGPT which has since been enhanced with online search capabilities that improves accuracy.”