A structured approach to formulating your prompts helps you get the most out of the AI tools you work with. In prompt engineering, we categorise prompts according to their purpose.
Here we use the model to engage in a conversation to gain knowledge or expertise on a topic. It is like talking to a skilled colleague: we ask questions or set up a scenario and let the model generate an informed response.
Here we ask the model to help us create content. Whether it is drafting an email or a project plan, the model acts as a collaborator in producing new material.
Here we present the model with specific content and draw on its analytical capabilities. At the simple end, this could be a text you paste into the model for analysis; at the more complex end, it could be data drawn from your IT systems. The goal is to gain a deeper understanding of the material or to see it from new perspectives.
Here we feed the model content we want to change or improve. This could be anything from tailoring a message to a specific target audience to translating a text from one language to another. The model acts as a “designer”, reshaping the content according to our instructions. Example prompts for each of the four categories are sketched below.
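To make the four categories concrete, here is a minimal sketch of what a prompt in each category might look like. The scenarios and wording are purely illustrative, not prescribed templates; in practice, you would paste the text into your chat tool or send it through whatever interface you use.

```python
# Illustrative prompts for the four categories described above.
# The scenarios and wording are examples only.

conversation_prompt = (
    "You are an experienced project manager. I am preparing my first "
    "stakeholder meeting. Ask me clarifying questions, then advise me on "
    "how to structure the agenda."
)

creation_prompt = (
    "Draft a short email to my team announcing that the project deadline "
    "has moved from 1 June to 15 June. Keep the tone reassuring and end "
    "with a clear call to action."
)

analysis_prompt = (
    "Below is the text of a customer complaint. Identify the main issues "
    "raised, the customer's tone, and any concrete requests.\n\n"
    "<paste the text here>"
)

transformation_prompt = (
    "Rewrite the following product description for a non-technical "
    "audience, and then translate the result into German.\n\n"
    "<paste the text here>"
)
```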
Prompt engineering is something of a science: the better the prompt, the better the response, and the more useful it becomes for your work. Here are three important elements to consider if you want to improve your prompts:
Give the model a persona. This way, it can take on a specific role or area of expertise. For example, if we want a marketing text, we can instruct the model to assume the role of a skilled copywriter.
Try to include a description of the intended target audience in your prompt. This helps ensure that the content matches the audience’s expected level of understanding and that the model tailors its response to their needs.
Make sure to provide the model with detailed information about the challenges, goals and relevant knowledge relating to your task. This creates a solid foundation for more nuanced and relevant results. The sketch below shows how persona, audience and context can be combined in a single prompt.
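As an illustration of how the three elements can work together, here is a minimal sketch that sends a persona, a target audience and a task context to a chat model. The OpenAI Python client, the model name and the product scenario are assumptions made for the example; the same structure applies to any chat-style tool, and the composed text can just as well be pasted directly into a chat window.

```python
# A minimal sketch combining the three elements above: persona, target
# audience, and task context. The OpenAI client and model name are
# assumptions; any chat-style API or interface works the same way.

from openai import OpenAI

persona = "You are a skilled copywriter with experience in B2B marketing."
audience = "IT managers in mid-sized companies with limited time to read."
context = (
    "We are launching a backup service for small businesses. "
    "Key selling points: automatic daily backups, EU data centres, flat pricing."
)
task = "Write a 100-word introduction for the product page."

client = OpenAI()  # expects OPENAI_API_KEY to be set in the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; use whichever model you have access to
    messages=[
        {"role": "system", "content": persona},  # the persona
        {
            "role": "user",
            "content": f"Target audience: {audience}\n\nContext: {context}\n\nTask: {task}",
        },
    ],
)
print(response.choices[0].message.content)
```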
Working with prompts for large language models can be complex, as even minor adjustments can produce large variations in the quality of the responses. Here are a few tips on how to achieve better results:
1. Consider the tone and style of your text, e.g. expert or conversational, formal or informal, and the desired level of readability.
2. Start with the end in mind. Consider what you want the language model to help you with and what kind of result you want, e.g. a table, an email or something else entirely. What will the model need in order to produce it? If you can picture the result, describe it in your prompt.
3. Include relevant and nuanced context. This could be knowledge from your colleagues, a strategy document or something else entirely. Ask yourself whether you need to gather additional information first.
4. Rewrite and adjust your prompts from time to time until they produce clear and useful responses. Even small changes can produce very different results.
5. Save the most successful prompts for later use (e.g. in a Word document or an Excel sheet) and share them with your colleagues, who may be working on similar tasks. A sketch of a reusable prompt template follows below.
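As a small illustration of tips 1 to 3 and 5, here is a sketch of a reusable prompt template that spells out tone, desired result and context, and then stores the finished prompt in a shared file so colleagues can reuse it. The template fields, the file name prompt_library.json and the example task are illustrative choices, not a fixed standard.

```python
# A sketch of a reusable prompt template covering tips 1-3 (tone, desired
# result, context) and tip 5 (saving successful prompts for reuse).
# Field names, the example task and the file name are illustrative only.

import json
from pathlib import Path

TEMPLATE = (
    "Tone and style: {tone}\n"
    "Desired result: {result}\n"
    "Context: {context}\n\n"
    "Task: {task}"
)

prompt = TEMPLATE.format(
    tone="formal but easy to read",
    result="a table with the columns 'risk', 'likelihood' and 'mitigation'",
    context="excerpt from our project strategy: <paste the relevant section here>",
    task="List the five biggest risks for the rollout and how we can reduce them.",
)

# Keep prompts that work well so colleagues can reuse and adapt them.
library_path = Path("prompt_library.json")
library = json.loads(library_path.read_text()) if library_path.exists() else {}
library["risk_table_v1"] = prompt
library_path.write_text(json.dumps(library, indent=2, ensure_ascii=False))
```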
For those who want to dig deeper into the topic, here are some articles on the latest research in the field: