Anthropic updates its service policy: third parties may now use its Claude AI models in "products for minors"

Last week, Anthropic updated its service policy, announcing that, effective June 6, minors may use services built on its AI models. At the same time, the terms more clearly prohibit using the company's AI for purposes such as "violating user privacy."


The first change Anthropic made was to rename the document from "Acceptable Use Policy" to "Usage Policy", a relatively strong statement of user responsibility.

Additionally, Anthropic noted that while it still prohibits users under 18 from using its Claude series of AI models directly, it has recognized the models' potential for educational use. After careful consideration, the company now allows the Anthropic API to be integrated into "products for minors".

Anthropic also updated its terms to explicitly prohibit using its AI to develop systems or technologies that recognize human emotions, and to require additional safety measures in "high-risk usage scenarios" such as medical decision-making, legal guidance, finance, insurance, academic testing, and media content generation; users who fail to comply bear responsibility for the consequences.
