OpenAI responds to the "ChatGPT going crazy" incident: a token prediction bug was the root cause

Recently, ChatGPT unexpectedly went out of control, with users reporting confusing and nonsensical responses, which sparked widespread discussion on social platforms such as Reddit and Hacker News. OpenAI responded quickly, confirmed that the problem was caused by a bug in token prediction, and stated that it had been fixed.

When users asked the model questions, they suddenly found its replies becoming illogical and repetitive. The anomaly caused an uproar on social media, where users shared the strangest replies they had received and described the model as having "gone insane."


On social media, one user posted a video of asking ChatGPT for music recommendations and receiving a nonsensical response; another asked for travel recommendations, only for the model's answer to trail off into rambling. More strikingly, some users reported that GPT-4 kept repeating "Happy Listening!🎵" as if it had completely lost the plot.

OpenAI officially confirmed the ChatGPT bug as soon as reports emerged, and after an expedited fix the problem was resolved. According to the official explanation, a user-experience optimization rolled out on February 20, 2024 introduced an error that affected how the model processed language. Specifically, the bug affected the step in which the model selects numbers, corrupting the probabilities used to predict the next token and causing it to pick the wrong words.
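To make that explanation more concrete, here is a minimal sketch of how next-token sampling works and why corrupted probabilities produce incoherent text. This is an illustrative toy example, not OpenAI's actual inference code; the vocabulary and numbers are invented.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample the next token id from the softmax distribution over logits."""
    rng = rng or np.random.default_rng()
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# A toy vocabulary and a distribution that strongly prefers a sensible word.
vocab = ["listening", "happy", "music", "travel", "🎵"]
logits = np.array([4.0, 1.0, 0.5, 0.2, 0.1])
rng = np.random.default_rng(0)

print(vocab[sample_next_token(logits, rng=rng)])  # usually "listening"

# If a bug scrambles or flattens the numbers feeding the sampler,
# low-probability tokens get chosen far too often and the output
# degenerates into repetitive or nonsensical text.
broken_logits = np.zeros_like(logits)  # every token now equally likely
print(vocab[sample_next_token(broken_logits, rng=rng)])  # essentially random
```

The point of the sketch is simply that generation is a long chain of such sampling steps, so even a small numerical error at the selection stage compounds quickly into gibberish.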

Beyond the official explanation, some analyses in the community suggested that GPT-4's problem might be related to the tokenizer. In an online class, Andrej Karpathy discussed why large models sometimes produce weird output, noting that the tokenizer can cause a model to perform poorly on spelling-related tasks, since it sees subword tokens rather than individual letters (see the sketch below). This view was echoed by some users and has drawn deeper attention to model training and optimization.
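As a rough illustration of the tokenizer point, the sketch below uses the open-source tiktoken library, which implements the BPE tokenizers used by GPT-4-era models, to show that a word is encoded as a few subword tokens rather than letters. It illustrates Karpathy's general observation and is not an analysis of this specific incident.

```python
# pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer family used by GPT-4-era models

word = "unbelievable"
token_ids = enc.encode(word)
pieces = [enc.decode([t]) for t in token_ids]

print(token_ids)  # a handful of integer ids, not one per letter
print(pieces)     # subword pieces; the model never "sees" individual letters
```

Because the model operates on these subword ids, tasks that depend on letter-level structure (spelling, reversing words, counting characters) are harder for it than they look.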

The ChatGPT incident has triggered discussion about the safety and stability of large language models. Although OpenAI delivered a fix quickly, the episode is a reminder that as artificial intelligence technology advances, more attention must be paid to the stability and potential risks of these models.
