According to foreign media reports, AI startup Anthropic has been facing a tough time lately. After the company launched its AI product Claude 2.1, many users found the model difficult to communicate with and use, as it often refused to execute commands for no apparent reason.
The root of the problem is that Claude 2.1, in strictly adhering to the company's published AI ethics constitution, has become overly cautious and rule-bound in its safety and moral judgments. In its pursuit of AI safety, Anthropic has effectively sacrificed some of the product's performance.
As a result, a large number of paying subscribers have voiced strong dissatisfaction. They have taken to social media to complain that Claude 2.1 is "dead" and say they are ready to cancel their subscriptions in favor of OpenAI's competing ChatGPT.
Industry insiders point out that Anthropic's predicament once again highlights the dilemma facing AI startups: strict self-regulation to ensure AI safety is undoubtedly a step in the right direction, but over-weighting ethical and legal considerations can cost a company its head start and leave it at a disadvantage in an increasingly crowded field.
Anthropic itself is now deeply in crisis. Maintaining safety without losing flexibility is a challenge for any AI company, and the industry will be watching to see how Anthropic responds to its current dilemma and retains the users drifting toward its competitors.