Technology outlet TechCrunch reported yesterday (August 26) that Anthropic has published the "system prompts" for its Claude AI models.
To help AI models better understand human instructions, prompt engineering in practice involves two core layers: user prompts and system prompts.
- User prompts: the text a user enters, from which the AI model generates its answer.
- System prompts: prompts supplied by the provider, typically used to set the context of the conversation, provide guidance, or establish rules.
In general, a system prompt tells the model about its basic characteristics and what it should and should not do.
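The two prompt layers described above can be sketched as a simple chat request payload. This is an illustrative example only, not Anthropic's actual API; the `build_request` helper and the model name are hypothetical.

```python
# Illustrative sketch: how a system prompt and a user prompt are
# combined in a typical chat-style request payload.
# The helper function and model name are hypothetical, not a real API.

def build_request(system_prompt: str, user_prompt: str) -> dict:
    """Assemble a chat request carrying both prompt layers."""
    return {
        "model": "example-model",    # placeholder model name
        "system": system_prompt,     # sets context, guidance, and rules
        "messages": [
            # the user prompt that the model answers directly
            {"role": "user", "content": user_prompt},
        ],
    }

request = build_request(
    system_prompt=(
        "You are a helpful assistant. Be polite, and honestly "
        "acknowledge when you do not know something."
    ),
    user_prompt="Explain what a system prompt is in one sentence.",
)
print(request["system"])
```

The key point the sketch shows is that the system prompt travels alongside every user message, so it shapes the model's behavior for the whole conversation rather than for a single answer.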
Common practice in the industry
Every generative AI company, from OpenAI to Anthropic, uses system prompts to prevent (or at least try to prevent) its models from misbehaving and to guide the overall tone and sentiment of their responses.
For example, a system prompt might tell the model that it should be polite but never apologize, or that it should honestly acknowledge that it cannot know everything.
However, vendors usually keep these system prompts confidential, both for competitive reasons and because malicious users could exploit the information to bypass safety protections.
Anthropic opts to publish its system prompts
However, Anthropic has been working to portray itself as a more ethical and transparent AI provider, and it has published system prompts for its latest models (Claude 3.5 Opus, Sonnet, and Haiku) on the Claude iOS and Android apps and on the web.
Alex Albert, head of developer relations at Anthropic, said in a post on X that Anthropic plans to release this type of information regularly as it updates and fine-tunes its system prompts.