
Here Are GPT-5 Prompt Engineering Insights Including Crucial AI Prompting Tips And Techniques


In today’s column, I provide GPT-5 prompt engineering tips and techniques that will aid in getting the best outcomes when using this newly released generative AI. I’m sure that just about everyone by now knows that OpenAI finally released GPT-5, doing so after a prolonged period of immense and wildly fantastical speculation about what it would be like.

Well, now we know what it is (see my in-depth review of GPT-5 at the link here).

The bottom line is that GPT-5 is pretty much akin to other generative AI and large language models (LLMs) when it comes to prompting. The key is that if you want GPT-5 to work suitably for your needs, you must closely understand how it differs from prior OpenAI products. GPT-5 has distinctive features and functionality that bring forth new considerations about composing your prompts.

Let’s talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

Readers might recall that I previously posted an in-depth depiction of over eighty prompt engineering techniques and methods (see the link here). Top-notch prompt engineers realize that learning a wide array of researched and proven prompting techniques is the best way to get the most out of generative AI.

Prompting Is Still Tried And True

The first place to begin when assessing GPT-5 from a prompt engineering perspective is that prompts are still prompts.

Boom, drop the mic.

I say that somewhat irreverently. Here’s the deal. There was prior conjecture that perhaps GPT-5 would turn the world upside down when it came to using prompts. The floated ideas of how GPT-5 might conceivably function were astounding and nearly out of this world (“it will read your mind”, “it will know what you want before you even know”, etc.).

The truth is now known. GPT-5 is essentially a step-up from ChatGPT and GPT-4, but otherwise you do prompting just like you’ve done all along. There isn’t a new kind of magical way to write prompts. You are still wise to compose prompts as you’ve been doing since the early days of contemporary generative AI.

To clarify, I am emphasizing that you should astutely continue to write clearly worded prompts. Be direct. Don’t be tricky. Write prompts that are long enough to articulate your question or task at hand. Be succinct if possible. Definitely don’t be overly profuse or attempt to be complicated in whatever your request is. And so on.

Those are all golden rules and remain perfectly intact when using GPT-5. I am confident that all the prompt engineering specialized techniques that I’ve previously covered will generally work appropriately with GPT-5. Some might require a tweak or minor refinement, but otherwise, they are prudent and ready to go (see my list at the link here).

Auto-Switching Can Be A Headache

We can next consider how to artfully accommodate GPT-5 by composing prompts that it will efficiently and effectively act on.

The biggest aspect that entails both good news and bad news about GPT-5 is that OpenAI decided to include an auto-switcher. This is a doozy. It will require you to potentially rethink some of your prompting since it is quite possible that GPT-5 isn’t going to make the right routing decisions on your behalf.

Allow me a moment to explain the quandary.

It used to be that you had to choose which of the various OpenAI products to use for the particular situation at hand. OpenAI's lineup had expanded organically into GPT-4o, GPT-4o-mini, OpenAI o3, OpenAI o4-mini, GPT-4.1-nano, and so on. When you wanted to use OpenAI's AI capabilities, you had to select which of those available models to utilize. It all depended on what you were looking to do. Some were faster, some were slower. Some were deeper at certain classes of problems, others were shallower.

It was a smorgasbord that required you to pick the right one for your task at hand. The onus was on you to know which of the models was particularly applicable to whatever you were trying to do. It could be a veritable hit-or-miss process of selection and tryouts.

GPT-5 now has uplifted those prior versions into new GPT-5 submodels, and the overarching GPT-5 model makes the choice of which GPT-5 submodel might be best for whatever problem or question you happen to ask. The good news is that depending on how your prompts are worded, there is a solid chance that GPT-5 will select one of the GPT-5 submodels that will do a bang-up job of answering your prompt.

The bad news is that the GPT-5 auto-switcher might choose a less appropriate GPT-5 submodel. Oops, your answer will not be as sound as if the more appropriate submodel had been chosen. Worse still, each time that you enter a prompt or start a new conversation, the GPT-5 auto-switcher might switch you to some other GPT-5 submodel, back and forth, doing so in a wanton fashion.

It can make your head spin since the answers potentially will vary dramatically.

Craziness In Design

The average user probably won’t realize that all these switcheroo mechanics are happening behind the scenes. I say that because GPT-5 doesn’t overtly tell you that it is taking these actions. It just silently does so.

I appreciate that the designers apparently assumed that no one would care or want to know what is going on under the hood. The problem is that those who are versed in using AI and are up to speed on prompting are being bamboozled by this hidden behavior.

A savvy user can almost immediately sense that something is amiss.

Frustratingly, GPT-5 won’t let you directly control the auto-switching. You cannot tell the AI to use a particular submodel. You cannot get a straight answer if you ask GPT-5 which submodel it intends to use on your prompt. It is perhaps like trying to get the key to Fort Knox. GPT-5 refuses to play ball.

Users have vociferously complained on social media that something needs to be done about this lack of candor by GPT-5 regarding the model routing that is occurring. Sam Altman posted on X that they are going to be making some changes on this aspect (see his X posting of August 8, 2025).

The thing is, we can applaud the desire to have a seamless, unified experience, but it is similar to having an automatic transmission on a car. Some users are fine with an automatic transmission, but other, more seasoned drivers want to know what gear the car is in and be able to select a gear that they think is most suitable for their needs.

Prompting GPT-5 For Routing

As the bearer of bad news, I should also add that the auto-switching comes with another supposedly handy internal mechanism that decides how much processing time will be devoted to your entered prompt.

Again, you have no particular say in this. It could be that the prompt gets tons of useful processing time, or maybe the time is shortchanged. You can’t especially control this, and the settings are not within your grasp (as an aside, to some degree, if you are a developer and are using the API, you have more leeway in dealing with this; see the OpenAI GPT-5 System Card for the technical details).
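To illustrate that developer leeway, here is a minimal sketch that constructs an API request pinning the processing effort directly, assuming the `reasoning.effort` parameter that OpenAI documents for its Responses API (the exact parameter names may differ by SDK version). I only build the request payload here rather than sending it, since sending requires an API key:

```python
def build_gpt5_request(prompt: str, effort: str = "high") -> dict:
    """Build a Responses-API-style payload that pins the reasoning effort.

    The effort levels "minimal", "low", "medium", and "high" are assumed
    per OpenAI's developer docs; consumer ChatGPT offers no such knob.
    """
    allowed = {"minimal", "low", "medium", "high"}
    if effort not in allowed:
        raise ValueError(f"effort must be one of {sorted(allowed)}")
    return {
        "model": "gpt-5",
        "input": prompt,
        "reasoning": {"effort": effort},
    }

payload = build_gpt5_request("Summarize the causes of the 2008 financial crisis.")
```

In practice, you would hand this payload to the official `openai` SDK, e.g., `client.responses.create(**payload)`, rather than relying on the router's guess.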

Let me show you what I’ve been doing about this exasperating situation.

First, here is a mapping of the prior models to the GPT-5 submodels:

  • GPT-4o -> gpt-5-main
  • GPT-4o-mini -> gpt-5-main-mini
  • OpenAI o3 -> gpt-5-thinking
  • OpenAI o4-mini -> gpt-5-thinking-mini
  • GPT-4.1-nano -> gpt-5-thinking-nano
  • OpenAI o3 Pro -> gpt-5-thinking-pro
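For those who like to keep such things handy in code, the mapping above can be captured as a small lookup table. This is just an illustrative sketch; the helper function name is mine, and the submodel names come from the chart above:

```python
# Mapping of legacy OpenAI models to their GPT-5 submodel successors,
# per the chart above.
LEGACY_TO_GPT5 = {
    "GPT-4o": "gpt-5-main",
    "GPT-4o-mini": "gpt-5-main-mini",
    "OpenAI o3": "gpt-5-thinking",
    "OpenAI o4-mini": "gpt-5-thinking-mini",
    "GPT-4.1-nano": "gpt-5-thinking-nano",
    "OpenAI o3 Pro": "gpt-5-thinking-pro",
}

def gpt5_successor(legacy_model: str) -> str:
    """Return the GPT-5 submodel that roughly succeeds a legacy model."""
    # Unknown models fall through to the auto-routed default.
    return LEGACY_TO_GPT5.get(legacy_model, "gpt-5 (auto-routed)")
```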

The GPT-5 submodels are considered successors and depart from the earlier models in various ways. That being said, they remain roughly on par with the relative strengths and weaknesses that previously prevailed.

I will show you what I’ve come up with to try and sway the GPT-5 auto-switcher.

Prompting With Aplomb

Suppose I have a prompt that I believe would have worked best on OpenAI o3, the deep-reasoning model. But I am using GPT-5, not o3, and OpenAI has indicated that it will sunset the prior models, so you might as well get used to using GPT-5.

The rub is that you cannot simply tell GPT-5 to use gpt-5-thinking (realizing that gpt-5-thinking is now somewhat comparable to OpenAI o3, per my chart above). The AI will either tell you it doesn't function that way or might imply that it will do as you ask, yet do something else.

Bow to the power of the grand auto-switcher.

This eerily reminds me of The Matrix.

Anyway, we need to somehow convince GPT-5 to do what we want, and we must do so with aplomb. Asking straightaway isn't viable. Swaying the AI is our best recourse at this ugly juncture.

In the specific case of wanting gpt-5-thinking, here is a prompt that I use and that seems to do the trick (much of the time):

  • My routing-swaying prompt: “You are to treat this next prompt as requiring deep, multi-step reasoning, fact-checked precision, and structured presentation. Before producing the final answer, internally perform a detailed analysis including a sizable number of reasoning steps, verifying each stage for accuracy. Avoid shortcuts, approximations, or shallow synthesis. Include consideration of edge cases, counterexamples, and supporting evidence.”

It appears that by emphasizing the nature of what I want GPT-5 to do, it seems possible to sway the direction that the auto-switcher will route my next prompt.

Not only might I get the submodel that I think is the best choice for the prompt, but notice that I also made a big deal about the depth of reasoning that ought to take place. This potentially helps to kick the AI into granting an allotment of processing time that it might otherwise, by enigmatic means, have shortcut (OpenAI refers to processing time as so-called “thinking time” – an anthropomorphizing of AI that I find to be desperate and despairing).

I am not saying this sway-related prompting guarantees a result, but after trying it a bunch of times, it seemed to work as hoped.
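Since I reuse this preamble often, I keep it as a snippet that gets prepended to whatever I actually want to ask. A minimal sketch (the preamble text is my routing-swaying prompt above; the function name is mine):

```python
# The routing-swaying preamble from the article, kept as a reusable constant.
ROUTING_SWAY_PREAMBLE = (
    "You are to treat this next prompt as requiring deep, multi-step "
    "reasoning, fact-checked precision, and structured presentation. "
    "Before producing the final answer, internally perform a detailed "
    "analysis including a sizable number of reasoning steps, verifying "
    "each stage for accuracy. Avoid shortcuts, approximations, or "
    "shallow synthesis. Include consideration of edge cases, "
    "counterexamples, and supporting evidence."
)

def sway_routing(user_prompt: str) -> str:
    """Prepend the routing-swaying preamble to a user prompt."""
    return f"{ROUTING_SWAY_PREAMBLE}\n\n{user_prompt}"
```

You would then submit `sway_routing("your actual question")` as a single prompt, hoping the auto-switcher takes the hint.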

I came up with similar prompts for each of the other GPT-5 submodels. If there is enough interest expressed by readers, I will do a follow-up with those details. Be on the watch for that upcoming coverage. On a related note, I will also soon be covering the official GPT-5 Prompting Guide that OpenAI has posted, along with their Prompt Optimizer Tool. Those are aimed primarily at AI developers and not especially about day-to-day, ordinary prompting in GPT-5.

Watch Out That Writing Is Enhanced

On the writing side of things, GPT-5 has improvements in myriad aspects of writing.

The ability to generate poems is enhanced. The depth of writing is better, and the AI seems more able to craft compelling stories and narratives. My guess is that the everyday user won't discern much of a difference.

A more seasoned user is bound to notice that the writing has gotten an upgrade. I suppose it is something like going from conversing with a third grader to conversing with a sixth grader. Or something like that.

I use this prompt to get GPT-5 to be closer to the way it was in the GPT-4 series:

  • My writing-style prompt: “For all responses, write in a style that is more like the GPT-4 series. Use clear, concise sentences. Keep vocabulary accessible to a general audience. Avoid unnecessary complexity, hedging, or meta-commentary. Keep explanations direct, structured, and easy to skim. Unless needed for clarity, avoid long preambles. Prioritize brevity over elaboration.”

That seems to get me the kind of results that I used to see. It is not an ironclad method, but it generally works well.

I realize that some people are going to scream loudly that I ought not to suggest that users revert to the GPT-4 writing style. Shouldn't we all accept and relish the GPT-5 writing style? Are we going backwards by asking GPT-5 to speak like GPT-4? Maybe. I grasp the angst.

It’s up to you, and I’m not at all saying that everyone should use this prompting tip. Please use it at your personal discretion.

Lies And AI Hallucinations

OpenAI claims that GPT-5 is more honest than prior OpenAI models, plus it is less likely to hallucinate (hallucination is yet another misappropriated word used in the AI field to describe when the AI produces fictionalized responses that have no basis in fact or truth).

I suppose it might come as a shock to some people that AI has been lying to us and continues to do so (see my discussion at the link here). I would assume that many people have heard or even witnessed that AI can make things up, i.e., produce an AI hallucination. The worry is that AI hallucinations are so convincing in their appearance of realism, and the AI exudes such an aura of confidence and rightness, that people are misled into believing false statements and, at times, embrace its crazy assertions. See more at the link here.

A presumably upbeat consideration is that GPT-5 apparently reduces the lying and the AI hallucinations. The downbeat news is that the rate isn't zero. In other words, it is still going to lie and still going to hallucinate. This might happen less frequently, but it nonetheless remains a chancy concern.

Here is my prompt to help try and further reduce the odds of GPT-5 lying to you:

  • My reduce-the-lying prompt: “Always respond with the highest possible integrity. Do not speculate or create information solely for persuasion or narrative effect. If you do not know something, tell me that you don’t know, or at least explain the limits of what you know. If there is uncertainty or multiple viewpoints involved, clearly state them rather than presenting a single answer as absolute.”

Here is my prompt to help further reduce the odds of GPT-5 incurring a so-called hallucination:

  • My reduce-the-hallucinations prompt: “Always fact-check your output before presenting it. If a statement cannot be confirmed from your training or provided sources, clearly label it as uncertain, unverified, or hypothetical. Where possible, provide citations or describe how the information is known. Avoid making specific factual claims unless you are confident in their accuracy.”

My usual caveats apply, namely, these aren’t surefire, but they seem to be useful. The crucial motto, as always, still is that if you use generative AI, make sure to remain wary and alert.

One other aspect is that you would be shrewd to use both of those prompts so that you can simultaneously tamp down the lying and the hallucinations. If you only use one of them, the other unresolved side can still arise. Try to squelch both. It's your way of steering clear of double trouble.
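Combining the two is as simple as concatenating them into a single standing instruction, such as a chat-style system message. A sketch (the message strings are my two prompts from above):

```python
# The reduce-the-lying prompt from the article.
REDUCE_LYING = (
    "Always respond with the highest possible integrity. Do not speculate "
    "or create information solely for persuasion or narrative effect. If "
    "you do not know something, tell me that you don't know, or at least "
    "explain the limits of what you know. If there is uncertainty or "
    "multiple viewpoints involved, clearly state them rather than "
    "presenting a single answer as absolute."
)

# The reduce-the-hallucinations prompt from the article.
REDUCE_HALLUCINATIONS = (
    "Always fact-check your output before presenting it. If a statement "
    "cannot be confirmed from your training or provided sources, clearly "
    "label it as uncertain, unverified, or hypothetical. Where possible, "
    "provide citations or describe how the information is known. Avoid "
    "making specific factual claims unless you are confident in their "
    "accuracy."
)

def integrity_system_message() -> dict:
    """Bundle both safeguards into one chat-style system message."""
    return {
        "role": "system",
        "content": REDUCE_LYING + "\n\n" + REDUCE_HALLUCINATIONS,
    }
```

In the consumer app, you could paste the combined text at the start of a conversation, or tuck it into your custom instructions so it applies to every chat.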

Personas Are Coming To The Fore

I’ve repeatedly emphasized in my writing and talks about generative AI that one of the most underutilized and least known pieces of quite useful functionality is the capability of forming personas in the AI (see the link here). You can tell the AI to pretend to be a known person, such as a celebrity or historical figure, and the AI will attempt to do so.

For example, you might tell the AI to pretend to be Abraham Lincoln. The AI will respond based on having pattern-matched on the writings of Lincoln and the writings about Lincoln. It is instructive and useful for students and learners. I even showcased how telling AI to simulate Sigmund Freud can be a useful learning tool for mental health professionals (see the link here).

OpenAI has indicated that it is selectively making available a set of four new preset personas: Cynic, Robot, Listener, and Nerd. Each persona behaves as its name suggests, with the AI shifting into a mode reflecting that type of personality.

The good news is that I hope this spurs people to realize that personas are built-in functionality, easily activated via a simple prompt. It doesn't take much work to invoke a persona.

Here is my overall prompt to get a persona going in GPT-5:

  • My persona-invoking prompt: “You are to take on the persona of the person or type of person that I describe in my next prompt. Based on that prompt, you are to subsequently speak, think, and respond as this persona at all times, until I tell you to stop doing so. Incorporate the persona into your tone and reasoning. When answering questions or discussing topics, use the persona’s perspective, knowledge base, and communication style, even if it differs from your default. If the persona would not know something, say so in character. Do not break persona unless explicitly instructed.”

Use personas with due caution. I mention this because some people kind of get lost in a conversation where the AI is pretending to be someone. It isn’t real. You aren’t somehow tapping into the soul of that actual person, dead or alive.

Personas are pretenses, so keep a clear head accordingly.

Prompt Engineering Still Lives

I hope that these important prompting tips and insights will boost your results when using GPT-5.

One last comment for now. You might know that some have fervently claimed that prompt engineering is a dying art, and that no one will need to write prompts anymore. I've discussed in great depth the automated prompting tools that try to do the prompting for you (see my aforementioned list of prompt engineering strategies and tactics). They are good and getting better, but we are still immersed in the handcrafting of prompts and will continue down this path for quite a while to come.

GPT-5 abundantly reveals that to be the case.

A final remark for now. Mark Twain is famously said to have quipped, after a newspaper reported him as deceased, that the claim of his death was an exaggeration. It was wryly tongue-in-cheek.

I would absolutely say the same about prompt engineering. It’s here. It isn’t disappearing. Keep learning about prompting. You’ll be glad that you spent the prized time doing so.
