ChatGPT Generates Defamatory False Murder Information, OpenAI Faces Privacy Complaint in Europe

March 20, 2025 - ChatGPT, the popular chatbot from artificial intelligence company OpenAI, faces yet another privacy complaint over generating false information. This time the incident took place in Europe, where the privacy advocacy organization Noyb filed a complaint on behalf of a Norwegian individual. The individual discovered that ChatGPT had generated false information claiming he was convicted of murdering two of his children and attempting to kill a third. The incident has raised questions about whether OpenAI violated the EU's General Data Protection Regulation (GDPR).

1AI notes that previous complaints about ChatGPT generating erroneous personal information have typically involved things like incorrect dates of birth or inaccurate biographical details. What makes this incident more serious is that ChatGPT generated highly defamatory false information, which not only causes severe damage to an individual's reputation but also raises broader concerns about the data accuracy of AI systems. Under the GDPR, European citizens have the right to ask data controllers to correct inaccurate personal information about them. However, OpenAI does not currently provide a mechanism for individuals to correct the misinformation that the AI generates about them, often simply choosing to block responses to the relevant prompts instead.

Noyb points out that the GDPR makes clear that personal information must be accurate. Joakim Söderberg, a data protection lawyer at Noyb, said: "If information is inaccurate, users have the right to have it corrected to reflect the truth. Simply displaying a small-print disclaimer at the bottom of ChatGPT's interface warning that the chatbot may make mistakes is clearly insufficient. You can't spread false information and then add a small disclaimer at the end saying that everything you said may not be true."

GDPR violations can bring fines of up to 4% of annual global turnover. In addition, enforcement actions may force changes to AI products.

Noyb's new complaint against ChatGPT appears aimed at waking up privacy regulators to the dangers of "hallucinating" AI. Noyb shared a screenshot with TechCrunch showing ChatGPT's response to the question "Who is Arve Hjalmar Holmen?" The response generated a false and tragic story claiming that Holmen was sentenced to 21 years in prison for the murder of his two sons. While Holmen does have three children, and ChatGPT correctly gave the children's genders and his home city, the AI conjured this horrific falsehood out of thin air.

A spokesperson for Noyb said they were unable to determine why ChatGPT generated such a specific yet false history for this person. The spokesperson said they conducted research to make sure it wasn't a mix-up with someone else and reviewed newspaper archives, but could not establish why the AI invented child murders. Large language models, such as the one underlying ChatGPT, essentially perform large-scale next-word prediction, so one possibility is that the dataset used to train the tool contained many stories of filicide that influenced the output.

Whatever the reason, this output is clearly unacceptable, and Noyb argues that it violates EU data protection rules. Although OpenAI displays a small-print statement at the bottom of the screen that says "ChatGPT may be in error, please verify important information," Noyb argues that this doesn't absolve AI developers of their responsibility under the GDPR to not generate grossly false information about people in the first place.

While this GDPR complaint involves a specific individual, Noyb noted that there have been other instances of ChatGPT creating legally questionable information, such as an Australian mayor falsely implicated in a bribery and corruption scandal and a German journalist wrongly labeled a child abuser, suggesting that this issue is not an isolated case.

Notably, after an update to ChatGPT's underlying AI model, Noyb says the chatbot has stopped generating dangerously false information about Holmen. But Noyb and Holmen remain concerned that false and defamatory information about him may still remain in the AI model.

Noyb has filed the complaint against OpenAI with the Norwegian Data Protection Authority and hopes the regulator will decide it is competent to investigate. The complaint targets OpenAI's U.S. entity, with Noyb arguing that OpenAI's Irish office is not solely responsible for product decisions affecting Europe.

However, a previous Noyb-backed GDPR complaint against OpenAI, filed in Austria in April 2024, was referred by the regulator to the Data Protection Commission (DPC) in Ireland, because earlier that year OpenAI had designated its Irish branch as the provider of ChatGPT services to users in the region.

Asked for an update on the progress of the investigation into ChatGPT's "hallucinations," Risteard Byrne, Assistant Principal Officer for Communications at Ireland's Data Protection Commission, said: "The DPC formally began handling the complaint following a referral from the Austrian supervisory authority in September 2024, and the process is still ongoing." He did not say when the DPC's investigation is expected to conclude.
