December 5 News. Jeff Hancock, founder of the Stanford University Social Media Lab and a misinformation expert, has admitted in court documents that he used the AI tool ChatGPT to help organize citations, and that the tool produced so-called "hallucinations." Critics say this has seriously undermined the credibility of the document.
1AI understands that Hancock submitted the sworn declaration to the court in support of Minnesota's bill on the use of deepfake technology to influence elections. The bill is currently being challenged in federal court. The challengers include conservative YouTuber Christopher Khols and Minnesota Congresswoman Mary Franson, whose attorneys found that Hancock's sworn declaration appeared to contain non-existent citations. They therefore called the document "unreliable" and asked that it be excluded from consideration.
Hancock subsequently admitted to using ChatGPT, but denied using it to author any of the content. He stated that he had written and reviewed the substance of the declaration himself, and that he was confident each of its claims was supported by the latest academic research in the field and reflected his views, as an expert, on the impact of AI technology on misinformation and society.
Addressing the citation errors, Hancock explained that he had used Google Scholar and GPT-4 to identify articles potentially relevant to the declaration, in order to integrate his existing knowledge with new scholarship. He said he used GPT-4 to generate the citation list, not realizing that the tool had fabricated two citations, a phenomenon known as "hallucination," and had added an incorrect author to a third.
Hancock stated: "It was not my intention to mislead the court or the attorneys. I sincerely apologize for any confusion this may have caused. Nonetheless, I stand firmly behind all the substantive points in the declaration."
The incident underscores the importance of human review and verification when relying on AI-assisted tools.