Get the most from your AI


How can we get more useful results from our interactions with AIs/LLMs? Better prompts.

I mentioned prompts in a previous column. Prompts are how we tell the LLM what we want, by typing or, with some LLMs, speaking. Today we’ll delve a little more deeply into how to write better prompts and, in doing so, improve the responses we get. I’ll be giving you suggestions, or rules, for getting better responses from your AI. There are many different approaches; you can find them all over the Internet, and your LLM can even help you if you ask it. If you have an approach that seems to work for you, by all means stick with it. And just be aware that I’ll be using AI and LLM interchangeably throughout.

There are several ways to write prompts, but in all cases you must be clear and concise about what you want the LLM to accomplish. Even though your LLM may seem like a human who can infer things about the task, it isn’t. If you aren’t clear, the AI will give you whatever answer seems most likely based on the most common responses it has seen. “Who won the big game?” might give you the answer you’re looking for, but maybe not. “Who won last night’s big game?” is more specific, but if there were several games last night it could easily not give you the answer you’re after. “Who won last night’s game between the 49ers and the Bengals?” is really specific and should get you the answer you’re after.

Try to keep your prompt as simple as possible but include all the details you’re after. If you want the final score in addition to who won, include that in your prompt: “Who won the game last night between the 49ers and the Bengals and what was the final score?” You can pack all your questions into a single prompt or you can ask follow-up questions: “Who was the most valuable player?” The AI can retain context; that is, it can “remember” what you’ve asked in this conversation and how it has answered. But its memory isn’t infinite.

All LLMs have what is known as a “context window,” which is how many tokens (remember, we discussed tokens in a previous article) the LLM can retain, and, if you look, you can find the size of each LLM’s context window. Some are impressively large. Claude 2 from Anthropic has a context window of 100,000 (100k) tokens, or around 75,000 words. For comparison, the novel “The Great Gatsby” is only about 72,000 tokens. ChatGPT comes in several versions, and its context window can be up to 16,000 tokens. The larger the context window, the longer the conversation you can have with the AI.

If the answers seem to be wandering, or the LLM isn’t answering you in context, it’s likely you have exceeded the context window and should start over. You can do that by simply saying “let’s change subjects” or “let’s start over.” That should tell the AI to wipe its context window clean. Some LLM interfaces have a button to start a new conversation or, like Bing, have a paintbrush icon that, when selected, wipes the context window and starts a new conversation.

So, the first rule: Be precise. Tell the AI exactly what you want and give it enough information to provide the answer you’re after.

The second rule: Continue the conversation. If the LLM isn’t giving you the kind of answer you’re after, keep the conversation going and provide more details. This is especially important if you’re trying to accomplish a complex task like setting an itinerary for a visit to a new place.

The third rule: Don’t be afraid to start over. Sometimes the conversation will just go awry. Rather than trying to fix it, maybe just start over.

The fourth rule: Do not give it personal information! Certainly don’t give it your Social Security or other identifying number, and that includes your phone number, your address, your age and so on. Assume that the LLM will get hacked and all that information will become available to the hackers.

The fifth rule: Don’t be afraid to try a different AI. LLMs are trained differently and one could give you a better answer than another. If things aren’t working out, switch.

One last rule: Don’t be afraid to check your AI’s answer. If you’ve asked it about something with which you have no experience, it wouldn’t hurt to do a quick search to see if the answer makes sense. And if you already know something about the subject, trust your knowledge. If the AI says to boil your pasta until it’s dry and you’re looking for well-cooked pasta, don’t believe it.

That’s all for this week’s column. I hope this will help you get more out of your interactions with your AI.

As always, my intent with these columns is to spark your curiosity, give you enough information to get started, and arm you with the necessary keywords (or buzzwords) so you’ll understand the basics and are equipped to search for more detailed information.

Please email me with questions, comments, suggestions for future columns, to sign up for my newsletter, or whatever at [email protected].

You can read the original columns in the Hillsboro Times Gazette at That will take you to the most recent column in the newspaper. You can read all my columns and sign up for my newsletter so they’ll be delivered to your email when I publish them at

Tony Sumrall, a Hillsboro native whose parents ran the former Highland Lanes bowling alley, is a maker with both leadership and technical skills. He’s been in the computing arena since his graduation from Miami University with a bachelor’s degree in systems analysis, working for and with companies ranging in size from five to hundreds of thousands of employees. He holds five patents and lives and thrives in Silicon Valley which feeds his love for all things tech.
