Recently, the Wall Street Journal reported that artificial intelligence company OpenAI has developed a tool that can recognize ChatGPT-generated text with high accuracy, but has not yet officially released it. In response, OpenAI acknowledged that it is researching text watermarking technology, but said the technology still faces many challenges.
It is reported that OpenAI plans to focus its text watermarking technology on detecting text from ChatGPT, rather than content generated by other companies' models. The technique would make slight adjustments to how ChatGPT chooses words, embedding an invisible "watermark" in the text that specialized tools can then detect.
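The mechanism described above, nudging word choice so that a statistically detectable pattern emerges, can be illustrated with a toy sketch. This is a simplified illustration of the general "green-list" watermarking idea discussed in the research literature, not OpenAI's actual method; the vocabulary, parameters, and function names here are hypothetical:

```python
import hashlib
import math
import random

def green_list(prev_word: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    # Seed a PRNG with a hash of the previous word, then select a
    # pseudo-random "green" subset of the vocabulary. A watermarking
    # generator prefers words from this subset; ordinary text hits it
    # only about `fraction` of the time.
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * fraction)))

def detect(words: list[str], vocab: list[str], fraction: float = 0.5) -> float:
    # Count how often each word falls in the green list seeded by its
    # predecessor, and return a z-score against the unwatermarked
    # expectation (hit rate = `fraction`). Large positive values
    # suggest watermarked text.
    hits = sum(
        1 for prev, cur in zip(words, words[1:])
        if cur in green_list(prev, vocab, fraction)
    )
    n = len(words) - 1
    expected = fraction * n
    std = math.sqrt(n * fraction * (1 - fraction))
    return (hits - expected) / std

if __name__ == "__main__":
    vocab = [f"w{i}" for i in range(500)]
    rng = random.Random(0)

    # A "watermarked" generator: always pick the next word from the green list.
    words = ["w0"]
    for _ in range(150):
        words.append(rng.choice(sorted(green_list(words[-1], vocab))))
    print(detect(words, vocab))   # large positive z-score

    # Plain random text: z-score stays near zero.
    plain = [rng.choice(vocab) for _ in range(151)]
    print(detect(plain, vocab))
```

The detector needs no access to the generating model, only the shared hashing scheme, which is also why the scheme is fragile: translation or rewriting replaces the words and erases the statistical signal, as the article notes below.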
OpenAI said that text watermarking is only one of the approaches it is exploring; others include classifiers and metadata, all designed to determine the provenance of text. Although text watermarking performs well in some cases, its effectiveness drops when the text is tampered with, for example through translation, rewriting, or the insertion of special characters. In addition, the technology could disproportionately affect specific groups, such as non-native English speakers.
Given these complicating factors and their potential impact on the broader ecosystem, OpenAI said it will proceed cautiously with research on text provenance technology and prioritize the development of authentication tools for audiovisual content.
This decision has sparked extensive industry discussion about identifying and managing AI-generated content. As AI technology develops rapidly, how to strike a balance between protecting innovation and preventing risk has become a focus for all parties.