For a long time, the seemingly plausible but error-ridden answers produced by large language models have commonly been referred to as "AI hallucinations". However, three philosophy researchers from the University of Glasgow in the U.K. recently put forward a different view: the term "AI hallucination" is not accurate.
On June 8 local time, the journal Ethics and Information Technology published a paper by the three researchers. The paper argues that chatbots' fabricated responses should not be called "hallucinations" and would be more accurately described as "bullshitting."
The researchers point out that anyone who has studied psychology or used psychedelic drugs knows that "hallucination" is usually defined as seeing or perceiving something that does not exist. In the field of AI, "hallucination" is clearly a metaphor: large language models cannot see or perceive anything at all. The AI is not experiencing hallucinations; it is reproducing the human language patterns in its training data without any regard for factual accuracy.
"The machines are not trying to communicate something they believe or perceive, and their inaccuracy is not due to misunderstanding or hallucination. As we have pointed out, they are not trying to convey information at all. They are bullshitting," the researchers write.
The researchers argue that AI models have no beliefs, intentions, or understanding. Their inaccuracy stems not from misunderstanding or hallucination, but from the fact that they are designed to produce text that looks and sounds correct, with no "internal mechanism" to ensure factual accuracy.
The phenomenon of "AI hallucination" has been reported many times, such as the recent "Google search recommended users to add glue to pizza", and Musk's Grok "mistakenly believed" that he was a product of OpenAI, etc.
Last year, Cambridge Dictionary named "hallucinate" its Word of the Year for 2023. The word originally means to seem to see, hear, feel, or smell "something that is not there", usually because of poor health or drug use. With the rise of AI, "hallucinate" has taken on the extended meaning of an AI generating false information.