OpenAI faces a thorny problem: how to deal with students using ChatGPT to cheat. The company has developed a reliable method for detecting essays and research reports written by ChatGPT, yet despite widespread concern about students using AI to cheat, it has not released the technology publicly.
OpenAI has successfully developed a reliable technology for detecting ChatGPT-generated content. By embedding "watermarks" in AI-generated text, it reportedly achieves detection accuracy of up to 99.9%. Puzzlingly, this technology, which could solve an urgent problem, has not been released to the public. According to insiders, the project has been debated within OpenAI for nearly two years and was ready for release as early as a year ago.
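OpenAI has not disclosed how its watermark works, but published research on LLM watermarking, such as the "green list" scheme of Kirchenbauer et al. (2023), illustrates the general idea: at each generation step, pseudorandomly favor a subset of the vocabulary, then later test whether a text statistically over-uses those favored tokens. The Python sketch below follows that public scheme purely for illustration; the toy vocabulary, bias value, and hash-based seeding are assumptions, not details of OpenAI's method.

```python
# Illustrative "green list" watermarking sketch (after Kirchenbauer et al., 2023).
# NOT OpenAI's actual method; all constants and helpers here are hypothetical.
import hashlib
import math
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # stand-in for a real tokenizer vocabulary
GREEN_FRACTION = 0.5                      # share of the vocab favored at each step
BIAS = 4.0                                # logit boost applied to "green" tokens

def green_list(prev_token: str) -> set[str]:
    """Pseudorandomly split the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))

def sample_watermarked(prev_token: str, logits: dict[str, float]) -> str:
    """Boost green-list logits before sampling; the bias is invisible to a
    reader but statistically detectable across enough tokens."""
    greens = green_list(prev_token)
    boosted = {t: l + (BIAS if t in greens else 0.0) for t, l in logits.items()}
    weights = [math.exp(l) for l in boosted.values()]
    return random.choices(list(boosted), weights=weights, k=1)[0]
```

A notable property of such schemes is that the detector only needs the secret seeding scheme, not the model itself, so a text can be checked long after it was generated, which is what would make a tool like this practical for teachers.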
The factors holding the technology back are complex. First, OpenAI faces a dilemma: stick to its stated commitment to transparency, or protect user loyalty? An internal survey found that nearly one-third of loyal ChatGPT users oppose anti-cheating technology, a figure that puts considerable pressure on the company's decision-making.
Second, OpenAI is concerned that the technology could disproportionately harm certain groups, especially non-native English speakers. This concern reflects a core question in AI ethics: how can AI technology be made fair and inclusive?
Yet at the same time, the education sector's need for such a tool is becoming increasingly urgent. According to a survey by the Center for Democracy & Technology, 59% of middle and high school teachers are convinced that students are already using AI to complete their homework, up 17 percentage points from the previous school year. Educators urgently need tools to meet this challenge and preserve academic integrity.
OpenAI's hesitation has sparked internal controversy. Employees who support the release of the tool argue that the company's concerns pale in comparison to the huge social benefits the technology could bring. This view highlights the tension between technological development and social responsibility.
The technology itself also has potential weaknesses. Despite the high detection accuracy, some employees worry that the watermark could be erased by fairly simple means, such as running the text through translation software or editing it by hand. This concern reflects the challenges AI technology faces in real-world deployment.
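The detection side of the same toy scheme shows why such edits are damaging. The detector counts how many tokens fall in the green list of their predecessor and runs a one-sided z-test; translation or heavy rewording effectively re-draws the tokens at random, dragging the green-token rate back toward chance. This sketch reuses the hypothetical `green_list` helper from above and is, again, not OpenAI's actual detector.

```python
# Toy detector for the sketch above; reuses the hypothetical green_list().
import math

def watermark_zscore(tokens: list[str], green_fraction: float = 0.5) -> float:
    """One-sided z-test: how far the observed green-token count exceeds chance."""
    n = len(tokens) - 1  # number of (previous, current) token pairs scored
    if n <= 0:
        return 0.0
    hits = sum(tok in green_list(prev) for prev, tok in zip(tokens, tokens[1:]))
    mean = n * green_fraction
    std = math.sqrt(n * green_fraction * (1 - green_fraction))
    return (hits - mean) / std

# Watermarked text scores far above zero (e.g., z > 4 over a few hundred
# tokens); after translation or manual rewriting, the score collapses toward
# zero and the watermark is no longer detectable.
```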
In addition, deciding who should have access to the detector is itself a thorny issue. Distribute it too narrowly and it is of little practical use; distribute it too broadly and bad actors could study it and learn to defeat the watermarking scheme. Striking this balance requires careful design and management.
It is worth noting that other technology giants are also moving in this area. Google has developed SynthID, a watermarking tool for detecting text generated by its Gemini AI, though it is still in the testing phase. This reflects how seriously the AI industry as a whole takes content-authenticity verification.
OpenAI has meanwhile prioritized the development of audio and visual watermarking technology, especially in a U.S. election year, a decision that highlights the need for AI companies to weigh broader societal impacts as they develop new technology.
Reference: https://www.wsj.com/tech/ai/openai-tool-chatgpt-cheating-writing-135b755a