Today I'm sharing six prompt-writing tips that apply to a wide range of large language models (LLMs), not just ChatGPT. They also work with Wenxin Yiyan (ERNIE Bot) and various open-source models, and they can help improve output quality.
Tip 1: Write clear and specific instructions
Tip 2: Give the model time to think
Tip 3: Use multiple prompts
Tip 4: Guide the model
Tip 5: Break down tasks or prompts
Tip 6: Use external tools
Tip 1: Write clear and specific instructions
Specify the desired output format or length. One way to do this is to have the model play a role. For example:
"Pretend you're a tech blogger."
"Respond in about two sentences."
"Give me a summary of this paragraph. Here's an example of a summary I like: ___"
Provide examples. For instance, these are the steps of few-shot prompting:

- First example (first shot): give a prompt and the corresponding output (answer).
- Second example (second shot): give a second prompt and its output.
- Your prompt: give your actual prompt.
The model can now respond according to the pattern established in the first two examples.
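The steps above can be sketched in code. This is a minimal illustration of assembling a few-shot prompt; the `build_few_shot_prompt` helper and the sentiment examples are hypothetical, not from any particular library.

```python
def build_few_shot_prompt(examples, query):
    """Assemble a few-shot prompt from (input, output) example pairs."""
    parts = []
    for shot_input, shot_output in examples:
        parts.append(f"Input: {shot_input}\nOutput: {shot_output}")
    # The final block leaves "Output:" empty for the model to complete.
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

examples = [
    ("The movie was fantastic!", "positive"),   # first shot
    ("I want my money back.", "negative"),      # second shot
]
prompt = build_few_shot_prompt(examples, "Best purchase I ever made.")
print(prompt)
```

The model sees two completed input/output pairs and a trailing empty slot, so it tends to continue in the same pattern.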
Tip 2: Give the model time to think
Models are more likely to make reasoning errors when forced to respond immediately.

Requiring a chain of reasoning prompts the model to think progressively and more carefully. You can ask it to "think step by step" or specify the exact steps. This simple addition is very effective: "Think step by step."
For example, if you ask the model to grade a student's exam question, you can prompt the model like this:
Step 1: Start by solving the problem yourself.
Step 2: Compare your solution with the student's solution.
Step 3: Complete your own calculation before evaluating the student's solution.
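The grading steps above can be packaged into a reusable prompt template. This is a minimal sketch; the `build_grading_prompt` function and its wording are hypothetical.

```python
def build_grading_prompt(problem, student_solution):
    """Build a prompt that forces step-by-step grading."""
    return (
        "Your task is to grade a student's solution.\n"
        "Step 1: Work out your own solution to the problem first.\n"
        "Step 2: Compare your solution with the student's solution.\n"
        "Step 3: Do not decide whether the student is correct until you "
        "have completed your own calculation.\n\n"
        f"Problem: {problem}\n"
        f"Student's solution: {student_solution}"
    )

p = build_grading_prompt("What is 17 * 24?", "17 * 24 = 408")
print(p)
```

Spelling out the steps keeps the model from simply agreeing with the student's (incorrect) answer.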
Tip 3: Use multiple prompts

When accuracy matters most (rather than latency or cost), generate multiple responses with different prompts and then pick the best answer.
Some of the things you can tweak include:
- Temperature: controls the randomness or creativity of the model's responses. Higher temperatures give more varied, creative responses; lower temperatures give more conservative, predictable ones.
- Samples (shots): refers to the number of examples given in the prompt. Zero-shot means that no examples are provided, one-shot means that one example is provided, and so on.
- Prompt wording: phrase the request more directly or indirectly, ask for explanations, ask for comparisons, and so on.
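One common way to pick the best of several responses is a simple majority vote over the sampled answers (often called self-consistency). This is a minimal sketch: the three candidate responses are hard-coded stand-ins for answers you would sample from a real model at different temperatures or with different prompts.

```python
from collections import Counter

def majority_vote(responses):
    """Return the most common answer among candidate responses."""
    return Counter(responses).most_common(1)[0][0]

# Imagine three responses sampled for the same math question:
responses = ["42", "42", "41"]
best = majority_vote(responses)
print(best)  # → 42
```

Majority voting only works when answers can be compared for equality; for free-form text you would need a scoring or ranking step instead.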
Tip 4: Guide the model
Here are some examples:
- If the input is too long, the model may stop reading early. You can guide the model to process long content in chunks and recursively build a complete summary.
- Help it correct itself. It's hard for a model to self-correct if its initial answer is wrong, so prompt it explicitly: "I received your explanation of quantum physics. Are you sure of your answer? Can you start from the basics of quantum mechanics, re-examine it, and provide a corrected answer?"
- Don't ask leading questions. The model is eager to "please" you, so guide it while keeping the prompt open, and don't presuppose an answer in the question. For example:

"Do video games cause violence?" (bad question)
"Give me an unbiased overview of research findings on the relationship between video games and behavior." (good question)
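The chunk-and-summarize idea from the first bullet above can be sketched as a small recursive pipeline. The `summarize` function here is a hypothetical placeholder for a real LLM call (it just truncates text) so the control flow is runnable.

```python
def summarize(text, limit=50):
    """Placeholder for an LLM summarization call; just truncates."""
    return text[:limit]

def recursive_summary(document, chunk_size=200):
    # 1. Split the long document into chunks the model can handle.
    chunks = [document[i:i + chunk_size]
              for i in range(0, len(document), chunk_size)]
    # 2. Summarize each chunk independently.
    partial = [summarize(chunk) for chunk in chunks]
    # 3. Summarize the concatenated partial summaries into one result.
    return summarize(" ".join(partial), limit=100)

long_text = "Paris travel notes. " * 50
print(recursive_summary(long_text))
```

With a real model, each `summarize` call would be one API request, so the document's length is bounded only by the number of calls, not by the context window.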
Tip 5: Break down tasks or prompts

Break complex tasks down into multiple simple tasks. The reason: complex tasks have a significantly higher error rate than simple ones.
For example, consider this request: "I'm going to Paris for three days and I need to know what to pack, the best restaurants, and how to use public transportation."
- Intent 1: what to pack for a trip to Paris;
- Intent 2: recommendations for the best restaurants in Paris;
- Intent 3: guidance on how to use public transportation in Paris.
The AI processes each intent separately, providing customized suggestions for packing luggage, dining and getting around Paris, and then integrates these into one comprehensive answer.
Or, if the subtasks are interrelated:
Step 1: Decompose the task into queries.
Step 2: Feed the output of the first query into the next query.
Note: This may also reduce costs, as each step will cost less.
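The two steps above can be sketched as a simple chain. `ask_llm` is a hypothetical stand-in for a real model call (it returns a dummy answer); each query's answer is appended to the context passed to the next query.

```python
def ask_llm(prompt):
    """Placeholder for a real LLM call; returns a dummy answer."""
    return f"[model answer based on {len(prompt)} chars of input]"

# Step 1: decompose the Paris request into separate queries.
queries = [
    "What should I pack for three days in Paris?",
    "Recommend the best restaurants in Paris.",
    "How do I use public transportation in Paris?",
]

# Step 2: feed each query's output into the next query's context.
context = ""
for q in queries:
    answer = ask_llm(f"{context}\n{q}".strip())
    context += f"\nQ: {q}\nA: {answer}"

print(context.strip())
```

Because each step sends a shorter prompt than one monolithic request, the per-call cost noted above tends to drop as well.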
Tip 6: Use external tools
In general, if a task can be done more reliably and efficiently with a tool than with a large model, offload it to the tool and get the advantages of both. (This may not apply to non-developers and casual hobbyists can skip it, but the principle is: don't make a large model do something it's not good at.)
Here are some sample tools:
- Calculator: large models do not perform well at math; their original goal was to generate tokens/words, not numbers. A calculator can significantly improve an LLM's math capabilities.
- RAG (Retrieval-Augmented Generation): connect the model to external knowledge (the public web or private knowledge bases) instead of relying only on the context window.
- Code execution: run and test model-generated code using a code execution environment or calls to external APIs.
- External functions: define functions the model can write calls to, for example send_email(), get_current_weather(), get_customers(). Execute these functions on your side and return the result to the model.
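The external-functions loop can be sketched as follows. The model's structured output is hard-coded here, and the local `get_current_weather` implementation and its return fields are hypothetical; the tool name mirrors the examples above.

```python
import json

def get_current_weather(city):
    """Hypothetical local implementation of the weather tool."""
    return {"city": city, "forecast": "sunny", "temp_c": 21}

# Registry mapping function names the model may emit to local code.
TOOLS = {"get_current_weather": get_current_weather}

# Pretend the model responded with a structured function call:
model_output = json.dumps(
    {"function": "get_current_weather", "arguments": {"city": "Paris"}}
)

# The client parses the call, executes it locally, and would then
# send `result` back to the model as additional context.
call = json.loads(model_output)
result = TOOLS[call["function"]](**call["arguments"])
print(result)
```

The key point is that the model never executes anything itself: it only names a function and its arguments, and your code decides whether and how to run it.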
That's all. I hope this helps.