AI startup Anthropic's latest policy change has attracted attention. The company announced that it will allow minors to use its generative artificial intelligence system through third-party applications, subject to specific safety requirements, a move that has sparked widespread discussion and concern in the industry.
Under Anthropic's new policy, teens and children will be able to use third-party applications powered by its AI models under certain conditions. The move is seen as potentially beneficial for education and personal problem-solving, but it raises the challenge of ensuring that minors use AI tools safely.
Anthropic lists safety measures that developers must implement, such as age verification systems, content moderation, and educational resources. The company also said it will regularly audit apps for compliance, terminate accounts that violate its rules, and require developers to publicly disclose their compliance with these requirements.
The policy adjustment also reflects a broader industry trend. Competitors including Google and OpenAI are likewise exploring generative AI use cases for children. Last year, OpenAI worked with Common Sense Media to develop child-friendly AI guidelines, and Google rebranded its chatbot Bard as Gemini while offering a version tailored specifically for teenagers.
However, the potential risks of generative AI have also drawn considerable attention. Surveys indicate that some children have seen peers use generative AI in harmful ways, making guidance and supervision of children's use of these tools particularly important.
In an era full of both opportunities and challenges, Anthropic's policy adjustment has prompted deeper reflection and debate on minors' use of artificial intelligence. How to balance the benefits of AI against the need to protect minors' safety and privacy will be a key question going forward.