OpenAI offers a range of strategies and techniques to help users work with ChatGPT more effectively. These methods can be used on their own or combined for better results. The official guide presents six main strategies, each with specific tactics and examples. Main strategies:
1. Clear instructions:
Tell the AI exactly what you want. For example, if you want a short answer, say so: "give me a short answer." That way the AI doesn't have to guess your intentions. Models can't read your mind: if you need a brief reply, ask for it explicitly; if you need expert-level writing, make that clear as well. Clear instructions reduce the need for model guesswork. Example:
How to do it:
Include details in your queries to get more relevant answers.
Require the model to adopt a specific role or style.
Use separators to clearly indicate different parts of the input.
Clearly specify the steps required to complete the task.
Provide examples to help the model understand the task.
Specify the desired length of the output.
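The tactics above can be sketched as a small prompt-building helper. This is a minimal illustration, not an official API: the role, step list, delimiters, and length limit are illustrative choices that combine several of the tactics in one prompt.

```python
def build_prompt(article: str) -> str:
    """Combine several tactics: adopt a role, specify steps,
    use delimiters, and state the desired output length."""
    return (
        "You are an experienced technical editor.\n"           # adopt a role
        "Step 1: Summarize the article delimited by triple quotes.\n"
        "Step 2: List its three main claims.\n"                # specify the steps
        "Keep the summary under 50 words.\n"                   # specify the length
        f'"""{article}"""'                                     # delimiters mark the input
    )

prompt = build_prompt(
    "Large language models follow instructions better when prompts are specific."
)
```

The resulting string would be sent as the user message; the point is that every expectation the model might otherwise have to guess is stated explicitly.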
2. Provide reference text:
If you have specific information or examples on the topic you're writing about, give them to the AI so it can produce more accurate and relevant content. Language models can fabricate answers, especially when asked about a specific topic or for citations and URLs. Providing reference text helps the model give more accurate answers.
How to do it: Instruct the model to answer using a reference text.
Ask the model to reference the text when responding.
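As a sketch of this tactic, the reference text can be embedded in the prompt between delimiters, together with an instruction to answer only from that text. The `<article>` tags and the fallback sentence are illustrative choices, not required syntax.

```python
def prompt_with_reference(question: str, reference: str) -> str:
    # Embed the reference text between delimiters and tell the model
    # to answer only from it, with an explicit fallback if it can't.
    return (
        "Use the article below, delimited by <article></article> tags, "
        "to answer the question. If the answer cannot be found in the "
        'article, reply "I could not find an answer."\n'
        f"<article>{reference}</article>\n"
        f"Question: {question}"
    )
```

Grounding the model in supplied text this way reduces fabricated citations, because the model is told where its answer must come from.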
3. Break complex tasks into simpler subtasks:
If you have a complex topic to write about, break it into smaller sections: for example, write an outline first, then a section on each main idea. Just as complex systems in software engineering are decomposed into modular components, tasks submitted to language models benefit from the same approach. Complex tasks typically have higher error rates than simple ones, and they can often be redefined as workflows of simpler tasks. Example:
How to do it: Use intent classification to identify the most relevant instructions for a user's query.
For apps that require long conversations, summarize or filter previous conversations.
Summarize long documents in segments and recursively construct full summaries.
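The segment-wise summarization tactic can be sketched as a simple chunking step: split a long document into word-bounded segments, summarize each one separately, then summarize the summaries. The word-based splitting below is an illustrative assumption; real systems often split on tokens or document structure.

```python
def split_into_segments(text: str, max_words: int = 200) -> list[str]:
    """Split a long document into word-bounded segments so each can be
    summarized separately; the summaries are then combined recursively."""
    words = text.split()
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]
```

Each segment would be sent to the model for its own summary, and the concatenated summaries fed back in for a final pass — turning one error-prone task into a workflow of simple ones.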
4. Give the model time to "think":
Sometimes asking the AI to "think" before answering a question leads to a better answer: have it list the steps to solve the problem first, and only then give the answer. Models make more reasoning errors when forced to answer immediately. Asking the model for a "chain of thought" before its answer helps it reason its way to the correct result more reliably. Example:
How to do it: Instruct the model to work out its own solution before rushing to a conclusion.
Use an internal monologue or a series of queries to hide the model's inference process.
Ask the model whether it missed anything in its previous answer.
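The "hide the reasoning" tactic can be sketched as a small post-processing step: let the model reason freely, but show the user only what follows an agreed marker. The `"Final answer:"` marker is an illustrative convention, not something the API enforces.

```python
def extract_final_answer(model_output: str, marker: str = "Final answer:") -> str:
    """Return only the text after the marker, hiding the chain-of-thought
    reasoning the model produced before it."""
    idx = model_output.rfind(marker)
    if idx == -1:
        return model_output.strip()  # no marker: fall back to the full output
    return model_output[idx + len(marker):].strip()

raw = "Let me check step by step: 12 * 4 = 48. Final answer: 48"
extract_final_answer(raw)  # "48"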
5. Use external tools:
Sometimes combining the AI with other tools, such as data-retrieval tools, produces better results. Use the output of other tools to compensate for the model's weaknesses. For example, a text-retrieval system can supply relevant documents to the model, and a code-execution engine can help the model perform mathematical calculations and run code.
How to do it: Use embeddings-based search for efficient knowledge retrieval.
Use code execution to perform more accurate calculations or to call external APIs.
Give the model access to specific features.
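The core of embeddings-based retrieval can be sketched in a few lines: rank stored document vectors by cosine similarity to a query vector and pick the best match. In practice the vectors would come from an embedding model; the two-dimensional toy vectors here are purely illustrative.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec: list[float], doc_vecs: list[list[float]]) -> int:
    """Return the index of the document embedding closest to the query."""
    return max(range(len(doc_vecs)), key=lambda i: cosine(query_vec, doc_vecs[i]))
```

The retrieved document's text would then be inserted into the prompt as reference material (strategy 2), compensating for knowledge the model lacks.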
6. Test and adjust:
Experiment with different instructions and methods to see which works best, and then adjust based on the results.
Example: "Evaluating model output with standard answers" is an effective way to ensure the quality of the AI model's answers.
Define a standard answer: First, determine what known facts should be included in the correct answer to a question. These facts form the criteria by which the AI responses are evaluated.
Query the model and check facts: generate an answer with a model query, then check how many of the required facts that answer contains.
Assess the completeness of the answer: evaluate the answer's completeness and accuracy based on how many of the required facts it contains. If a response contains all or most of the required facts, it can be considered high quality.
This strategy is particularly useful for scenarios that require precision and detail, such as scientific, technical, or academic research. By comparing it with the standard answers, the output quality of the AI model can be effectively monitored and improved.
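The evaluation procedure above can be sketched as a scoring function: given a list of required facts, compute what fraction of them appear in the model's answer. The naive case-insensitive substring check is an illustrative simplification; real evaluations often use a second model to judge whether each fact is genuinely covered.

```python
def completeness(answer: str, required_facts: list[str]) -> float:
    """Fraction of required facts that appear (case-insensitively)
    in the model's answer: 1.0 means every fact is present."""
    text = answer.lower()
    hits = sum(1 for fact in required_facts if fact.lower() in text)
    return hits / len(required_facts)
```

Running this over a test set of questions with known facts gives a repeatable quality score, so prompt changes can be compared against each other rather than judged by eye.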
Related resources:
Prompt examples
Prompt libraries & tools
Papers on advanced prompting to improve reasoning
OpenAI Cookbook