Prompt Engineering Tips
By Dave Ingram and Dan Kim
Reinsurance News, December 2024
Imagine that you need to write a script for a client presentation and you decide to ask a Large Language Model (LLM) for help. So, you go to your favorite LLM and tell it to “Explain the basic concepts of life reinsurance.” Try it and see what help you get. It might not be bad, but it certainly isn’t something that will win you business.
Now imagine that you have a coworker who is an expert at writing queries to LLMs, which she calls prompts. She talks to you for a few minutes about the situation and she suggests this: “As an expert in life reinsurance, outline an advanced strategy that a mid-sized life insurance company could use to optimize its risk management portfolio through reinsurance. Your response should include an analysis of the current market trends influencing life reinsurance, a comparison of proportional and non-proportional reinsurance agreements in terms of their impact on capital management, and a step-by-step plan for implementing a reinsurance program. Additionally, address the potential regulatory challenges and suggest ways to enhance the insurer-reinsurer relationship to ensure long-term success. I am a reinsurance broker and I will be using your response as the basis for an initial client presentation, so make it compelling. Your response should be in bullet points so that it can be made into a pitch deck.”
Try both and see. You will definitely notice a big difference in the response that you get.
We hope the following prompt engineering tips give you the ammunition to write your prompts like an expert.
- You can start your prompt by assigning an identity to the LLM. “You are an expert tree surgeon” if you want advice about the health of your trees. “You are a friendly companion” if you just want conversation. “You are a math tutor” if you want help with your daughter’s homework. “You are an experienced personal trainer of former athletes trying to regain their youthful figure” if you want advice about your workouts. This identity assignment helps the LLM to have a better idea of what sort of answer to give you, how technical it should be, and at what level of detail.
- It also helps if the LLM knows about you. It seems intrusive, but if you think about it, if you do not reveal something about yourself, the LLM will likely give you an answer directed to a generic person and you probably are not a generic person. The sorts of things that the LLM would find helpful are your professional background, level of experience and expertise, and subject matter interests. These things can go into every prompt, or in ChatGPT you can enter them just once in the custom instructions. I have noticed that since using the custom instructions, ChatGPT will often use something from my background (Insurance Risk Management) when it gives an example as a part of a response. That seems like a nice touch.
- ChatGPT, in particular, loves to give you answers in the form of bullet point lists. But sometimes you don’t want that. Just saying “no bullets” doesn’t always work. However, it does seem to work better to give a positive statement about the format you do want. I usually do both, saying, “Answer in prose with paragraphs and sentences with no bullets.”
- Sometimes, however, I want to look at a high volume of similar data and a table allows me to focus on the comparisons. You can get whichever format you want, but usually you can only get it the first time if you include your desired format in your prompt. Other possible formats include timelines, code blocks, and downloadable files.
- One more thing that you might want to mention as a part of your prompt most of the time is an indication of what you are planning to do with the response. Sometimes I tell the LLM that I am preparing a board presentation, sometimes teaching a class, and if I want a compelling response, I say that I am trying to make a sale. The difference in response is often subtle but real.
- Usually, I am just fine with the tone of voice that the LLM uses to respond to my prompt. But I saw a list of other tones that you could ask for and several looked interesting. Critical: looking at both sides of an issue, as opposed to the relentlessly positive attitude that is normal. Persuasive: for when I need to use the response to win someone over. Narrative: a storytelling approach; not sure when I would use this, but it sounds interesting. Direct: straight to the point, probably what I want in general, just the facts.
- It also helps to specify the form of writing you want. Often, I just say that I am looking for a blog post. Sometimes I ask for an essay. I recently asked ChatGPT whether it saw any difference between the two and I was surprised to learn that it saw a big difference. An essay is more formal and longer than a blog post, possibly much longer. Blog posts are likely to be opinions, unsupported by a logical argument, while essays are well-reasoned discussions that rely on facts and logic to reach any conclusion. With a blog post, you are hoping to provoke an immediate reaction, while an essay wants the reader to spend some time thinking about what has been said. So be careful which term you use. I will.
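If you assemble prompts programmatically, for example when calling an LLM through an API, the basics above can be combined in a small helper. This is a minimal sketch; the function and argument names are our own invention, not part of any library, and the example prompt is a compressed version of the one from the introduction.

```python
def build_prompt(identity, about_me=None, task="", purpose=None,
                 output_format=None, tone=None):
    """Assemble a prompt from the basic elements discussed above.

    Each argument mirrors one tip: an identity for the LLM, context
    about the user, the task itself, the intended use of the response,
    the desired format, and an optional tone of voice.
    """
    parts = [f"You are {identity}."]
    if about_me:
        parts.append(f"About me: {about_me}")
    parts.append(task)
    if purpose:
        parts.append(f"I will use your response to {purpose}, "
                     "so tailor it accordingly.")
    if output_format:
        parts.append(f"Format your response as {output_format}.")
    if tone:
        parts.append(f"Use a {tone} tone.")
    return " ".join(parts)


# Compressed version of the reinsurance prompt from the introduction
prompt = build_prompt(
    identity="an expert in life reinsurance",
    about_me="I am a reinsurance broker preparing an initial client presentation.",
    task=("Outline an advanced strategy that a mid-sized life insurance "
          "company could use to optimize its risk portfolio through reinsurance."),
    purpose="make a compelling pitch",
    output_format="bullet points suitable for a pitch deck",
)
print(prompt)
```

The point of the sketch is simply that identity, personal context, purpose, and format are separate, reusable slots; whether you fill them by hand or in code, each one you leave empty pushes the LLM back toward a generic answer.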
The above are the basics of prompting. You should consider including those ideas with every prompt. The rest of these tips are things that we have learned either from our own trial and error or from others. If you try something here and find it useful, please pass it along.
- When I have completed a discussion of a topic where I think that I have considered all of the important issues and explained how my conclusion is the best for the problem, I will feed my discussion back into the LLM and ask “is there anything else that should be considered?” Usually, it will give me three to five additional subtopics. Some of those may be just a different way of saying something that I have already covered. Other points are really minor items that I feel totally comfortable leaving out. But, many times there is at least one thing that I left out and should have included. And that makes this step worthwhile.
- Sometimes, I ask the LLM to rewrite something and it just isn’t very good. I have heard that an LLM might give a different answer if asked a question a second time, so I just ask it to rewrite the response. But it seems that all too often the “rewrite” is close to word for word the same. What I have learned is that I should be more descriptive about what I want. Using “Rewrite” without any modifiers doesn’t give any direction. “Streamline” or “Elaborate” would each produce a different response. I am thinking that I often want it to “Embellish” or “Enliven.” I might want it to “Soft-pedal” when I should be asking it to “Sensationalize.”
- When I ask an LLM questions, the responses are sometimes too generic or not quite relevant to my intention. Working through this forces me to refine my questions and make them more specific. This iterative process also improves my question-framing skills for interactions with actual people.
- Requesting a summary of a long article helps you decide if it's worth reading in full. This is useful when facing many articles and needing to prioritize. You can specify the desired summary length, like "summarize for a one-minute read" or "summarize in 200 words or less."
- When seeking edits for your writing, ask to preserve your style and tone. This helps maintain your unique voice and character in the piece.
- For summaries of actuarial documents, I explicitly say, "avoid any translation or substitution of actuarial terms in the document" to prevent LLMs from replacing specific terms with more common words, which may not be accurate.
- Corporate financial report publications are often lengthy and include technical details. You can ask for key market trends, unexpected events, key risks, and takeaways. Also, you can ask if there is any material information that may impact potential investment in the company.
- Ask for a SWOT analysis. But don’t be surprised if you don’t get it on the first try; the LLM often seems lazy. It sometimes only does two or three of the four topics. You have to read carefully. Sometimes the strengths are not at all strong, or the opportunities are not so hot. Or it simply stops before doing all four. But don’t let it get away with that. Tell it to “go back and give me a really strong strength” or whatever else is not up to par.
- Many people are concerned about “hallucinations,” when the LLM comes back with an answer that sounds good but turns out to be totally made up. There are lots of ways that this can happen, and there are a few ways to detect this issue. One way, when you suspect something might be a hallucination, is to ask for the sources behind the response. One answer I have seen several times to that question is “that is what [that type of person] would likely say,” meaning that the LLM has reasoned itself to telling you something that it decided is a likely true answer, but with no actual evidence. Another even better way to check up on the veracity of a response is to ask the LLM for its confidence in a response. ChatGPT tells me that a request “for my confidence in a response is an invitation to reflect on and communicate the robustness and reliability of the answer based on the available data, sources, and context.” It goes on to tell me that a poorly articulated request can lead to low confidence if the LLM is not really sure what the user is asking for.
- Word of the week—“Burstiness”—it comes originally from telecommunications and refers to messages that come in bursts rather than a steady stream of data. It has come into use to describe something that differentiates human writing from AI writing, with humans showing more burstiness in their writing through a mix of longer and shorter sentences. So, if you want something written by AI to seem more human, ask it to increase the burstiness of what it writes.
- Second word of the week—“Perplexity”—which originally meant the ability of something to perplex or confuse someone. It is also used when looking for differences between human-generated and AI-generated prose; in that context it means the degree to which a diversity of vocabulary and a variety of sentence structures are used, with human writing usually having higher perplexity than AI writing. So, asking AI to write with greater perplexity is another way of trying to get it to seem more human.
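To make the two terms concrete, here is a rough, hand-rolled way to gauge both in a piece of text. These are simple proxies of our own devising, not the formal measures used by AI-detection tools: sentence-length variation stands in for burstiness, and the share of distinct words stands in for the vocabulary-diversity side of perplexity.

```python
import re
import statistics


def burstiness(text):
    """Proxy for burstiness: standard deviation of sentence lengths in words.

    Higher values mean a greater mix of long and short sentences.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths) if len(lengths) > 1 else 0.0


def vocabulary_diversity(text):
    """Rough vocabulary-diversity proxy: distinct words / total words."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    return len(set(words)) / len(words) if words else 0.0


# Uniform sentences score low; a mix of very short and long scores higher.
flat = "The cat sat here. The dog sat here. The bird sat here."
varied = "Stop. The reinsurer, having weighed every treaty clause, finally signed. Done."
print(burstiness(flat), burstiness(varied))
print(vocabulary_diversity(flat))
```

Running the comparison shows `flat` scoring zero burstiness (every sentence is the same length) while `varied` scores well above it, which is exactly the quality you are asking the AI to add when you request more burstiness.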
With these tips, we hope that you can fine-tune your approach to using LLMs and with that fine-tuning, get more usable responses to your prompts.
Statements of fact and opinions expressed herein are those of the individual authors and are not necessarily those of the Society of Actuaries, the newsletter editors, or the respective authors’ employers.
Dave Ingram is a researcher, writer and part-time consultant on risk and risk management. He serves as a member of the board of the Society of Actuaries (2021–2027) and has served in a variety of volunteer roles for the SOA, IAA and ASB. He has recently taken up working with Large Language Models and has created over 50 custom chatbots on ChatGPT and Poe.
Dan Kim, FSA, CERA, MAAA is head of annuity for Talcott Resolution. Dan can be contacted at dan.kim.actuary@gmail.com.