Ways to improve your prompts
- Be specific. LLMs are like skilled interns who have just joined your team: to get the most out of them, be clear and precise. For example, instead of asking “Tell me about Sam Altman”, ask “Tell me about Sam Altman’s first company” or “Tell me about Sam Altman’s personal life”. (The sketches after this list illustrate several of these tricks.)
- Instead of using descriptive text, write easy-to-follow, step-wise instructions.
- Avoid negative statements like “don’t do . . .”; turn them into positive “do . . .” statements. Just like “The Secret”, LLMs don’t care whether it’s a positive or a negative statement; they just generate the most probable next token, so stating what you want works better than stating what you don’t.
- Add more structure to your prompt. Make use of single, double and triple quotes, brackets, tags, and pseudo code. LLMs are trained on a good chunk of GitHub, so they are really good at understanding structured text.
- Models can take on different “personas”. Prime the model by adding phrases like “You are an expert translator”, “You are a master of . . .”, “Pretend you have a 150 IQ . . .”, or “Imagine you are Shakespeare”. This can be really useful when generating text for creative work.
- Chain of thought (CoT). Asking the model to show its work can significantly improve results, particularly for reasoning tasks. Add phrases like “let’s think step by step”, “show your work”, or “First, . . .” at the end of your prompt.
- Ask for reflection. You can ask the model to assess its own work, making it work harder and come up with better output. Ask questions like “Do you think this is the correct answer?” or “Did you meet the assignment?”, or simply click the “Regenerate” button.
- Lastly, combine multiple tricks together (the last sketch below does exactly this).
- Condition on good performance. LLMs don’t want to succeed; they want to imitate. You want to succeed, and you should ask for it.
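A minimal sketch of the first few tricks (specificity, step-wise instructions, positive framing); the prompts are illustrative, not from any source:

```python
# Vague prompt: the model has to guess which facts you care about.
vague = "Tell me about Sam Altman."

# Specific, step-wise, positively framed version of the same request.
specific = """Tell me about Sam Altman's first company.
Follow these steps:
1. Name the company and the year it was founded.
2. Summarize what it did in two sentences.
3. State how it ended (acquisition, shutdown, or pivot).
Only state facts you are confident about; if unsure, say so."""

print(vague)
print(specific)
```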
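A sketch of the structure trick; the `<article>` tag name is an arbitrary choice, not a required format:

```python
article = "(paste the article text here)"

# Tags make it unambiguous which part is instruction and which part is data.
prompt = f"""You will be given an article inside <article> tags.
Summarize it as exactly three bullet points.

<article>
{article}
</article>"""
print(prompt)
```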
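A minimal sketch of persona priming via a system message, using the OpenAI Python SDK; the `gpt-4o-mini` model name is an assumption, and an `OPENAI_API_KEY` must be set:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute the one you use
    messages=[
        # The persona lives in the system message.
        {"role": "system", "content": "You are an expert translator."},
        {"role": "user", "content": "Translate 'Wie geht es dir?' into English."},
    ],
)
print(response.choices[0].message.content)
```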
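Chain of thought is just an appended phrase; a sketch using the classic bat-and-ball question:

```python
question = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 "
            "more than the ball. How much does the ball cost?")

# Without CoT, models often blurt out the intuitive wrong answer ($0.10).
cot_prompt = question + "\n\nLet's think step by step, then give the final answer."
print(cot_prompt)
```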
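Reflection can be a second turn in the same conversation; a minimal sketch under the same SDK and model-name assumptions as above:

```python
from openai import OpenAI

client = OpenAI()
messages = [{"role": "user", "content": "Write a haiku about autumn."}]

first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
draft = first.choices[0].message.content

# Feed the model its own draft and ask it to assess and revise.
messages += [
    {"role": "assistant", "content": draft},
    {"role": "user", "content": "Did you meet the assignment? Check the "
                                "5-7-5 syllable structure and revise if needed."},
]
second = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(second.choices[0].message.content)
```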
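And a sketch of composing several tricks at once, including conditioning on good performance; the wording is illustrative, not a fixed recipe:

```python
# Persona + conditioning on success + structure + step-wise asks + CoT.
prompt = """You are a world-class Python tutor; your explanations are always correct.

Task: explain list comprehensions to a beginner.
1. Give a one-sentence definition.
2. Show one example inside <code> tags.
3. Name one common mistake.

Let's think step by step."""
print(prompt)
```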
Notes
- Why are LLMs called “auto-regressive” models? Because they generate output one token at a time, and each new token is predicted conditioned on all of the previously generated tokens. A toy decoding loop is sketched below.
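A toy sketch of that loop; `next_token_distribution` is a hypothetical stand-in for a real model's forward pass:

```python
import random

def next_token_distribution(tokens):
    # Hypothetical stand-in: a real LLM computes this with a forward pass
    # over the entire prefix and returns a probability for every token
    # in its vocabulary.
    return [("the", 0.5), ("a", 0.3), ("<eos>", 0.2)]

tokens = ["Once", "upon"]
while tokens[-1] != "<eos>" and len(tokens) < 10:
    candidates = next_token_distribution(tokens)  # condition on ALL previous tokens
    words, probs = zip(*candidates)
    tokens.append(random.choices(words, weights=probs)[0])  # sample one token
print(" ".join(tokens))
```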
1. Learn to Spell: Prompt Engineering
- For text generation models, the prompt is a portal to an alternate universe: you are conditioning the LLM’s output.
- For instruction-tuned models, the prompt is a wish: you tell the model what you want. Think of it as your assistant; the clearer and more concise your instructions, the better the results.
- Use low-level patterns: instead of terms that require background knowledge to understand, give examples of the expected output. This is few-shot learning, also called in-context learning (see the sketch after this list).
- Itemize instructions: turn descriptive text into simple-to-follow instructions, and turn negative statements into assertions. For example, instead of saying “Don’t produce garbage text”, say “Only produce correct or reasonable text”.
- Models can take on “personas”. The quality of a persona’s output may vary, but it is worth trying.
- Add more structure to your prompt. Use quotes, ``` fences, brackets, pseudo code, tags, etc. LLMs are trained on a good chunk of GitHub code, so they are really good at handling structure. Furthermore, unlike English and other human languages, programming languages have a limited grammar, so their structure is easy for the model to pick up.
- Question → Think → Answer: chain-of-thought prompting. Ask the model to show its reasoning, e.g. “let’s think step by step”.
- Self-ask and recheck: have the model pose its own follow-up questions, answer them, and verify the final answer (see the sketch after this list).
- Compose all of these tricks together.
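A minimal few-shot sketch; the date-conversion task and its examples are made up for illustration:

```python
# Two worked examples establish the input/output pattern; the model
# is expected to continue it for the final input.
prompt = """Convert each date to ISO 8601.

Input: March 5, 2021
Output: 2021-03-05

Input: 7th of January 1999
Output: 1999-01-07

Input: Dec 25, 2024
Output:"""
print(prompt)
```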
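A minimal self-ask sketch; the phrasing and the `Follow-up 1:` scaffold are assumptions, not a fixed template:

```python
prompt = """Question: Who was the US president when the Eiffel Tower opened?

Break the question into follow-up questions, answer each one, combine
them into a final answer, and then recheck the final answer against
each follow-up before stating it.

Follow-up 1:"""
print(prompt)
```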
2. ChatGPT Prompt Engineering for Developers