Philosophy researchers at the University of Glasgow in the UK on "AI hallucination": "bullshitting" is the more accurate description

For a long time, people have referred to the seemingly reasonable but error-ridden answers produced by large language models as "AI hallucinations." However, three philosophy researchers from the University of Glasgow in the UK recently put forward a different view: the description "AI hallucination" is not accurate.

On June 8, local time, the journal Ethics and Information Technology published a paper by the three researchers. The paper argues that chatbots' fabricated responses should not be called "hallucinations"; it is more accurate to describe them as "bullshitting."

The researchers point out that anyone who has studied psychology or used psychedelic drugs knows that "hallucination" is usually defined as seeing or perceiving something that is not there. In the field of AI, "hallucination" is obviously a metaphor: large language models cannot see or perceive anything at all. AI is not experiencing hallucinations; it is reproducing the human language patterns in its training data without any regard for factual accuracy.

As the paper puts it: "The machines are not trying to communicate something they believe or perceive. Their inaccuracy is not due to misperception or hallucination. As we have pointed out, they are not trying to convey information at all. They are bullshitting."

The researchers argue that AI models have no beliefs, intentions, or understanding. Their inaccuracies are not the result of misunderstanding or hallucination, but of the fact that they are designed to produce text that looks and sounds correct, with no "internal mechanism" to ensure factual accuracy.


The phenomenon of "AI hallucination" has been reported many times, such as the recent "Google search recommended users to add glue to pizza", and Musk's Grok "mistakenly believed" that he was a product of OpenAI, etc.

Last year, Cambridge Dictionary announced "hallucinate" as its word of the year for 2023. The word originally means to seem to see, hear, feel, or smell "something that is not there," usually as a result of illness or of taking drugs. With the rise of AI, its meaning has been extended to cover AI systems producing hallucinations and generating false information.
