Chatbots talking nonsense? Oxford researchers use semantic entropy to see through AI "hallucinations"

In recent years, artificial intelligence has boomed and chatbot applications such as ChatGPT have gained popularity; people can get information from these chatbots through simple prompts. However, these chatbots are still prone to "AI hallucinations", i.e., providing wrong answers and sometimes even dangerous information.


Image source: Pexels

Among the causes of these "hallucinations" are inaccurate training data, insufficient generalization, and side effects of the data collection process. Researchers at the University of Oxford, however, have taken a different approach: in the latest issue of Nature, they detail a newly developed method for detecting "confabulations" in large language models (LLMs), i.e., arbitrarily generated incorrect information.

LLMs generate answers by looking for specific patterns in their training data. But this doesn't always work: just as humans can see animals in the shapes of clouds, AI bots can find patterns that don't exist. While humans know that the clouds are just shapes and that no giant elephant is floating in the sky, an LLM may treat such a pattern as real and "make up" non-existent technologies and other false information.

Researchers at the University of Oxford used the concept of semantic entropy to determine, through probability, whether an LLM is "hallucinating". Semantic entropy here concerns situations where the same wording can carry more than one meaning; for example, "desert" can refer to an arid region or to abandoning someone. When an LLM uses such words, it may be unclear which meaning is intended. By measuring semantic entropy, the researchers aim to determine whether an LLM's output is likely to be "hallucinatory".
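The idea can be illustrated with a small sketch: sample several answers to the same question, group answers that share a meaning into clusters, and compute the entropy over those clusters. High entropy means the model keeps giving semantically different answers, a warning sign of confabulation. The code below is a minimal, assumed illustration of that principle, not the paper's implementation; the `means_the_same` equivalence check is a hypothetical placeholder (the researchers use a more sophisticated entailment-based check), and the toy string comparison exists only for the example.

```python
import math

def semantic_entropy(answers, means_the_same):
    """Estimate semantic entropy from several sampled answers.

    answers: list of answer strings sampled from the model for one question.
    means_the_same: callable(a, b) -> bool, a placeholder for a semantic
        equivalence check (assumed here; not the paper's exact method).
    """
    # Group answers into clusters of equivalent meaning.
    clusters = []
    for ans in answers:
        for cluster in clusters:
            if means_the_same(ans, cluster[0]):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])

    # Probability of each meaning = fraction of samples in that cluster.
    n = len(answers)
    probs = [len(c) / n for c in clusters]

    # Shannon entropy over meanings: 0 means every sample agrees in meaning;
    # larger values mean the model's answers disagree semantically.
    return -sum(p * math.log(p) for p in probs)

# Toy usage: three samples agree in meaning, one does not.
samples = [
    "Paris is the capital of France.",
    "The capital of France is Paris.",
    "France's capital city is Paris.",
    "Lyon is the capital of France.",
]
# Naive equivalence check for this toy example only.
same = lambda a, b: ("Paris" in a) == ("Paris" in b)
print(round(semantic_entropy(samples, same), 3))  # ~0.562 nats; 0 would mean full agreement
```

In the sketch, paraphrases that express the same fact collapse into one cluster, so rewording alone does not raise the entropy; only genuinely different claims do.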

The advantage of using semantic entropy is that it can quickly detect "hallucination" problems in an LLM without additional supervised or reinforcement learning. Because the method does not rely on task-specific data, it can be applied even when the LLM faces a new task it has never encountered before. This should greatly increase users' trust in LLMs, even when the AI encounters a problem or instruction for the first time.

The research team said, "Our approach helps users understand when they need to be cautious about the output of LLMs and opens up new horizons for LLM applications that would otherwise be limited by unreliability."

If semantic entropy proves to be an effective means of detecting "hallucinations", then tools like this could be used to double-check AI output, making it a more reliable partner. However, IT House would like to remind readers that, just as humans are not infallible, LLMs can still make mistakes even with the most advanced error-detection tools. It therefore remains wise to double-check the answers provided by chatbots such as ChatGPT.
