Low Verbosity, High Effort: Prompting Secrets for Better GPT-5 Results
Zack Saadioui
8/10/2025
Hey there, so you've probably been hearing a ton about GPT-5. It's the new shiny toy on the block, & honestly, it's a pretty significant leap from what we were used to with GPT-4. But here's the thing a lot of people are discovering: just because the model is smarter doesn't mean you can get lazy with your prompts. In fact, it's kinda the opposite.
Turns out, to get the REALLY good stuff out of GPT-5, you need to be more intentional. The old ways of just throwing a vague question at it & hoping for the best are over. The new game is all about "low verbosity, high effort." It sounds a bit counterintuitive, right? Like, why would you say less to get more? But it's all about being incredibly precise, thoughtful, & strategic with your words. It’s less about writing a paragraph-long prompt & more about crafting a few, perfectly chosen sentences that guide the AI with surgical precision.
I've been digging deep into this, playing around with the new models, & talking to people who are getting incredible results. & I've put together this guide to share what I've learned. We're going to cover everything from the new way GPT-5 "thinks" to the nitty-gritty techniques that will make you feel like you have superpowers. So, grab a coffee, & let's get into it.
Understanding the New Brain: How GPT-5 is Different
First off, you gotta understand that GPT-5 isn't just a slightly more powerful version of its predecessors. OpenAI has fundamentally changed the architecture. They're calling it a "unified system." What that means is that instead of one giant model trying to do everything, there's a system of specialized models working together.
There’s a fast, efficient model for your everyday questions, & then there’s a deeper, more powerful “thinking” model for when you need some serious brainpower. A real-time router figures out which model is best for your query. This is a HUGE deal because it means the AI can be both quick & nimble for simple tasks, but also capable of deep, multi-step reasoning for the complex stuff.
This new setup has some cool new features we can play with, like:
Agentic Behavior: GPT-5 is designed to be more "agentic," which is a fancy way of saying it can take initiative, break down problems, & even use tools to get you an answer. It’s not just a text generator anymore; it’s more like a problem-solving partner.
Controllable Reasoning & Verbosity: OpenAI has given us new knobs to turn. The reasoning_effort parameter lets you decide how much "thinking" the model should do. You can set it to minimal for quick, straightforward tasks or crank it up for when you need it to really ponder a problem. Similarly, the verbosity parameter lets you control how chatty the response is, from low for concise answers to high for detailed explanations.
This is where the "low verbosity, high effort" idea really starts to shine. You can tell the model to be concise in its final answer (verbosity=low) but use a high degree of reasoning to get there. It’s like asking a brilliant expert to give you the bottom line without all the fluff.
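Here's a quick sketch of how those two knobs might be passed in code. The request shape below is an assumption based on OpenAI's Responses API (effort under a reasoning object, verbosity under a text object), so double-check the current SDK docs before relying on it:

```python
# Sketch: building a "low verbosity, high effort" request.
# The exact parameter shape is an assumption -- verify against the live API docs.

def build_request(prompt: str, effort: str = "high", verbosity: str = "low") -> dict:
    """Assemble keyword arguments for a concise-but-deeply-reasoned call."""
    return {
        "model": "gpt-5",
        "input": prompt,
        "reasoning": {"effort": effort},    # how hard the model thinks
        "text": {"verbosity": verbosity},   # how chatty the final answer is
    }

kwargs = build_request("Summarize the attached contract in three bullet points.")
# With the official Python SDK this would be used roughly as:
#   client = openai.OpenAI()
#   response = client.responses.create(**kwargs)
```

The point of separating the two settings: you pay for thinking where it matters (the reasoning) & save tokens where it doesn't (the final answer).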
The Core Principles: It's All About Clarity & Context
Even with all these new bells & whistles, the fundamental principles of good prompting haven't gone away. In fact, they're more important than ever.
Be Insanely Specific
These models can't read your mind, even though it sometimes feels like they can. The more you assume, the weirder the results will be. You have to spell EVERYTHING out.
Don't just say: "Write about my company."
Instead, try: "Write a 500-word blog post about my company, Arsturn. The tone should be friendly & helpful, aimed at small business owners. Focus on how our no-code AI chatbot builder helps businesses improve customer service & generate leads. Mention that it's trained on their own data for personalized conversations."
See the difference? The second prompt gives the AI a role, a target audience, a specific topic, key features to highlight, & a desired length. There's no room for guessing.
A great way to do this is to think about the 9 components of an effective prompt: role, task, steps, context, examples, persona, format, tone, & length. You don't always need all of them, but the more you can provide, the better.
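If you assemble prompts programmatically, those nine components make a natural template. Here's a minimal sketch (the labels & ordering are my own illustration, not an official scheme):

```python
# Sketch: assembling a prompt from the nine components named above.
# Component names and ordering are illustrative, not an official template.

def build_prompt(**components: str) -> str:
    """Join whichever of the nine components you provide, labeled & in a fixed order."""
    order = ["role", "task", "steps", "context", "examples",
             "persona", "format", "tone", "length"]
    parts = [f"{name.capitalize()}: {components[name]}"
             for name in order if name in components]
    return "\n".join(parts)

prompt = build_prompt(
    role="You are a friendly copywriter.",
    task="Write a 500-word blog post about Arsturn.",
    tone="Helpful, aimed at small business owners.",
)
```

You rarely need all nine, which is why the helper skips anything you leave out.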
Context is King (or Queen)
Providing reference text is one of the most powerful things you can do. GPT-5 has a massive context window — hundreds of thousands of tokens in the API. This means you can feed it entire documents, email threads, or even codebases & ask it to work with that information.
This is a game-changer for businesses. Imagine you want to create a customer support chatbot. In the past, this was a huge undertaking. Now, you can use a platform like Arsturn, which is designed for this exact purpose. Arsturn helps businesses build no-code AI chatbots trained on their own data. You can upload your FAQs, product manuals, & past customer conversations, & the AI learns it all. This way, when a customer asks a question, the chatbot doesn't just give a generic answer; it gives a response based on your actual business information. It's like having a support agent who has memorized your entire knowledge base.
This is the "high effort" part of the equation. It takes a bit of work to gather & provide the right context, but the payoff is enormous. You get answers that are not only accurate but also deeply relevant to your specific situation.
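A simple, reliable pattern for that "high effort" context work is to wrap your reference material in clear delimiters & tell the model to stick to it. A minimal sketch (the delimiter tags are just a convention I'm choosing here):

```python
# Sketch: grounding a question in reference text with explicit delimiters.
# The <reference> tags are a convention, not a required format.

def with_context(reference_text: str, question: str) -> str:
    """Wrap reference material in delimiters so the model answers from it."""
    return (
        "Answer using ONLY the reference material below. "
        "If the answer isn't there, say so.\n\n"
        "<reference>\n" + reference_text + "\n</reference>\n\n"
        "Question: " + question
    )
```

The "say so" instruction matters: it gives the model an honest out instead of nudging it to invent an answer when the context is missing something.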
Advanced Techniques: The Real Secrets to Unlocking GPT-5
Alright, let's get into the really fun stuff. These are the techniques that the pros are using to get mind-blowing results.
Chain-of-Thought & "Let's think step by step"
This one has been around for a bit, but it's SUPERCHARGED in GPT-5. The model has been trained to "think before it answers." Simply adding the phrase "Let's think step by step" to your prompt can trigger a more sophisticated reasoning process. The AI will literally outline its thought process before giving you the final answer.
This is incredibly useful for a few reasons:
It improves accuracy: By forcing the model to reason through the problem, you reduce the chances of it jumping to a wrong conclusion.
It's transparent: You can see how the AI arrived at its answer, which helps you trust the result & spot any potential errors in its logic.
It's great for complex tasks: If you're asking the AI to do something that requires multiple logical steps, this is a must-use technique.
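The trigger itself is tiny, which is why it's easy to bake into any prompt pipeline. A sketch:

```python
# Sketch: appending the chain-of-thought trigger to any prompt.
# The exact wording of the trigger phrase is the author's; the "final answer
# on its own line" addition is my own, to make the result easy to parse.

def chain_of_thought(prompt: str) -> str:
    """Append the reasoning trigger so the model shows its work before answering."""
    return (prompt.rstrip()
            + "\n\nLet's think step by step, "
            + "then state the final answer on its own line.")
```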
Few-Shot Prompting: Show, Don't Just Tell
Few-shot prompting is all about giving the AI examples of what you want. Instead of just describing the desired output, you show it. This is like giving the AI a mini-training session right within your prompt.
Let's say you want the AI to write product descriptions in a very specific style. You could say:
"Here's an example of a product description:
Product: The Morning Brew Coffee Maker
Description: Tired of bitter coffee? The Morning Brew uses a unique low-acid brewing process to deliver a perfectly smooth cup, every time. It’s simple, fast, & delicious.
Now, using that same style, write a product description for a set of noise-canceling headphones called 'The Sanctuary.'"
This is SO much more effective than just saying "write a short, punchy product description." The AI can pick up on the tone, the structure, & the level of detail from your example.
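If you're doing this repeatedly, it's worth a helper that stamps your examples into a consistent shape & leaves the last slot open for the model to fill. A sketch using the product-description format from above:

```python
# Sketch: building a few-shot prompt from (input, output) example pairs.
# The Product/Description labels mirror the example in the text above.

def few_shot_prompt(examples: list[tuple[str, str]], new_input: str) -> str:
    """Show completed (input, output) pairs, then leave the new case open."""
    blocks = [f"Product: {inp}\nDescription: {out}" for inp, out in examples]
    blocks.append(f"Product: {new_input}\nDescription:")
    return "\n\n".join(blocks)

prompt = few_shot_prompt(
    [("The Morning Brew Coffee Maker",
      "Tired of bitter coffee? The Morning Brew uses a unique low-acid "
      "brewing process to deliver a perfectly smooth cup, every time.")],
    "The Sanctuary noise-canceling headphones",
)
```

Ending on the bare "Description:" label is deliberate: the model completes the pattern instead of commenting on it.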
Multi-Stage Prompting: Breaking It Down
For really complex tasks, don't try to get everything in one go. Break the task down into smaller, more manageable stages.
For example, if you want the AI to write a detailed report on a new market trend, you could structure it like this:
Prompt 1: "I'm writing a report on the rise of personalized nutrition. First, create a detailed outline of the report, including an introduction, key trends, major players in the market, challenges, & a conclusion."
Prompt 2: "Great, now let's focus on the 'key trends' section. Elaborate on each of the points in your outline, providing at least one paragraph of detail for each."
Prompt 3: "Perfect. Now, write a compelling introduction for this report based on the outline & the details we've discussed."
This iterative process gives you more control & allows you to guide the AI at each step. It's more work, yes, but the quality of the final output will be leagues ahead of what you'd get from a single, massive prompt.
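The three stages above can be sketched as a running message history, which is how you'd keep the context alive across turns. The ask function here is a stub standing in for a real API call:

```python
# Sketch: the three-stage report flow as an accumulating message history.
# 'ask' is a placeholder stub -- a real version would call the model API.

def ask(messages: list[dict]) -> str:
    """Placeholder for a model call; returns a canned echo so the flow runs."""
    return f"[model reply to: {messages[-1]['content'][:40]}...]"

messages: list[dict] = []
for stage in [
    "Create a detailed outline for a report on personalized nutrition.",
    "Elaborate on each point in the 'key trends' section of your outline.",
    "Write a compelling introduction based on everything so far.",
]:
    messages.append({"role": "user", "content": stage})
    reply = ask(messages)
    messages.append({"role": "assistant", "content": reply})  # keep the context
```

The key detail is appending the assistant's reply back into the history, so stage 2 can refer to "your outline" & stage 3 to "everything so far."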
The "Secret" Techniques: Going from Good to Great
Ready to go even deeper down the rabbit hole? These are some of the less-common but incredibly powerful techniques that can truly transform your results.
Force Prompting & The AI Debate
This is a pretty cool, almost meta, technique I learned about. Instead of just accepting the first answer you get, you can "force prompt" the AI to do better by making it critique its own work.
Here's how it works:
Get your initial response.
Ask another AI model (or even the same one in a new chat) to act as an evaluator. You can even give it a scorecard based on things like verbal intelligence, logical consistency, & creativity.
Feed the original response & the critique back to the first AI & ask it to revise its work based on the feedback.
You can even set up an "AI debate" where you have two different models argue about the best way to answer your prompt. This process of critique & revision forces the AI to move beyond its generic, pre-programmed responses & produce something that is genuinely insightful & well-reasoned. It's a fantastic way to break through the "AI glass ceiling" & get truly exceptional content.
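The critique-&-revise loop boils down to two prompt shapes: one for the evaluator, one for the revision pass. A sketch (the rubric wording & dimensions are my own illustration):

```python
# Sketch: the two prompts in a critique-and-revise loop.
# Rubric dimensions are illustrative; tailor them to your content.

def critique_prompt(original: str) -> str:
    """Ask an evaluator model to score a draft against an explicit rubric."""
    return (
        "Score the text below from 1-10 on logical consistency, clarity, and "
        "creativity, then list its three biggest weaknesses.\n\n" + original
    )

def revision_prompt(original: str, critique: str) -> str:
    """Feed the draft and the critique back for a revision pass."""
    return (
        "Revise the original text to address every point in the critique. "
        "Keep what already works.\n\nOriginal:\n" + original +
        "\n\nCritique:\n" + critique
    )
```

Run the first prompt through one model (or a fresh chat), then hand its output plus the draft to the second, & repeat until the scores stop improving.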
Step-Back Prompting
This is another technique for getting the AI to think more deeply. It involves asking the model to "take a step back" & consider the broader context or underlying principles before answering.
For instance, instead of asking "How do I solve this specific coding bug?", you could try:
"I have a bug in my Python code. Before you give me the solution, take a step back & explain the general principles of debugging this type of error in Python. What are the common pitfalls? Once you've explained that, then provide the specific code to fix my issue."
This encourages the AI to provide not just an answer, but also a framework for understanding that will help you in the future.
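As a template, step-back prompting is just a wrapper that asks for principles before specifics. A sketch:

```python
# Sketch: wrapping a specific question in a step-back framing.
# The phrasing is adapted from the debugging example in the text above.

def step_back(question: str, domain: str) -> str:
    """Ask for general principles & pitfalls first, then the specific answer."""
    return (
        f"Before answering, take a step back: explain the general principles "
        f"of {domain} that apply here, and the common pitfalls. "
        f"Then answer the specific question.\n\nQuestion: {question}"
    )
```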
Negative Examples are Your Friend
We've talked about giving the AI positive examples, but it's often even more important to show it what not to do. This is especially true when you're trying to avoid common AI pitfalls like using overly flowery language or making up facts.
You could add a line to your prompt like:
"What to avoid:
Do not use clichés like 'in today's fast-paced world.'
Do not invent statistics or sources.
Do not make the tone overly formal or corporate."
By explicitly stating the failure modes, you give the AI clear guardrails, which dramatically increases your chances of getting the output you want.
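Negative constraints also slot neatly into a prompt pipeline: keep your "what to avoid" items in a list & append them to any prompt. A sketch:

```python
# Sketch: appending an explicit "What to avoid" section as guardrails.

def add_constraints(prompt: str, avoid: list[str]) -> str:
    """Append a bulleted list of failure modes the model must steer clear of."""
    lines = "\n".join(f"- {item}" for item in avoid)
    return prompt.rstrip() + "\n\nWhat to avoid:\n" + lines

prompt = add_constraints(
    "Write a blog post about our new product.",
    ["Do not use clichés like 'in today's fast-paced world.'",
     "Do not invent statistics or sources.",
     "Do not make the tone overly formal or corporate."],
)
```

Keeping the avoid-list as data means you can reuse the same guardrails across every prompt your team sends.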
Putting It All Together: A Practical Example
So, how does this all look in practice? Let's say you're a marketing manager & you want to use GPT-5 to brainstorm a new ad campaign.
The "Low Effort" Prompt:
"Give me some ideas for an ad campaign for our new running shoe."
Result: You'll probably get a list of generic, uninspired ideas like "Focus on speed" or "Show people running in beautiful locations."
The "Low Verbosity, High Effort" Prompt:
"You are an expert marketing strategist, famous for creating viral, emotionally resonant ad campaigns. Your target audience is millennial & Gen Z runners who are more motivated by community & personal achievement than by professional competition.
I am launching a new running shoe called the 'Momentum'. Its key feature is its responsive foam that makes running feel effortless, like you're floating.
Task: Generate three distinct ad campaign concepts.
Format for each concept:
Campaign Title:
Core Message (1-2 sentences):
Key Visuals (3-4 bullet points):
Social Media Slogan:
Let's think step by step to ensure each concept is unique & compelling.
What to avoid:
Do not show professional athletes.
Do not focus on winning races.
Avoid generic fitness slogans."
This prompt is concise, but it's packed with effort. It sets a role, defines the audience, provides context about the product, gives clear instructions on the format, uses a chain-of-thought trigger, & includes negative constraints. The result will be three well-thought-out, highly relevant campaign ideas that you can actually use.
The Future of Interaction
Getting good at this stuff isn't just about getting better answers from a chatbot. It's about learning how to collaborate with a powerful new kind of intelligence. As AI becomes more integrated into our work, the ability to communicate with it effectively will be a critical skill.
This is especially true for businesses looking to leverage AI for customer engagement. When you're setting up a chatbot on your website, you want it to be more than just a glorified FAQ page. You want it to be a helpful, engaging representative of your brand.
This is where a platform like Arsturn comes in. It provides the tools for businesses to have these kinds of high-quality interactions at scale. By building a no-code AI chatbot trained on your own data, you're essentially applying these "high effort" principles from the start. You're giving the AI all the context it needs to provide personalized, accurate, & helpful answers to your customers, 24/7. It's a way to build meaningful connections with your audience, boosting conversions & providing a customer experience that people will remember.
So, yeah, the "low verbosity, high effort" approach takes a bit more thought upfront. You can't just be lazy. But the results speak for themselves. You move from getting generic, often useless, responses to co-creating genuinely valuable insights, content, & solutions.
Hope this was helpful! It's a whole new world with these models, & we're all still learning. Let me know what you think, & if you have any other cool prompting secrets, I'd love to hear them.