8/13/2025

So, You're Not a Fan of GPT-5's New Personalities? Here's How to Work With (or Around) Them.

Hey there. So, the new GPT-5 is out, & if you’ve spent any time with it, you’ve probably noticed the new "personalities." OpenAI rolled out these pre-formed personas—Cynic, Robot, Listener, & Nerd—as part of their big August 2025 update. The idea, it seems, was to give users more control over the tone & style of their chats. A simple toggle to get a sarcastic response or a super-nerdy, detailed one. Sounds good on paper, right?
Well, if you're reading this, you've probably discovered it's not quite that simple.
For a lot of us, especially writers, marketers, developers, or anyone who relied on the AI's previously more nuanced and adaptable voice, this update felt… weird. Almost like a downgrade. Reddit threads popped up with titles like "GPT-5 is worse. No one wanted preformed personalities." People started complaining that the new model felt "colder" & "bland," a far cry from the "warmer" & more creative GPT-4o they had grown accustomed to. The backlash was so immediate that OpenAI's CEO, Sam Altman, had to publicly address it, even bringing back access to older models for paying subscribers.
Here’s the thing: while these personalities might be a fun gimmick for casual queries, they can be a real headache when you need the AI to adopt a specific voice—your voice, or your brand's voice. It feels like trying to paint a masterpiece with a set of four pre-mixed colors. It's limiting.
But don't cancel your subscription just yet. Turns out, you’re not stuck with the "Cynic" or the "Robot." With a bit of know-how, you can bend GPT-5 to your will, override its default settings, & get back that control you thought you lost. It all comes down to mastering the art of the prompt. This isn't just about asking questions anymore; it's about becoming an AI director.
In this guide, we're going to dive deep into why these pre-formed personalities exist, why they fall short for professional use, & most importantly, share a whole toolkit of advanced prompting techniques to help you craft the exact AI voice you need.

The GPT-5 Personality Paradox: An Upgrade That Felt Like a Downgrade

Let's be fair to OpenAI for a second. The launch of GPT-5 was meant to be a big step forward in user experience. The goal was to unify the different models (like GPT-4o, o3, etc.) into a single, smarter system that could automatically figure out if your query needed a quick, simple answer or a deeper, more reasoned "thinking" process. The personalities were likely part of this simplification. Instead of a user having to write "act like a sarcastic critic," they could just click the "Cynic" button.
Sam Altman even mentioned that the move was, in part, a reaction to user feedback & a step towards greater user control. However, the initial execution missed the mark for a significant chunk of the user base. The "autoswitcher" that was supposed to seamlessly route queries apparently malfunctioned at launch, leading to inconsistent performance.
But the bigger issue was the feel of the AI itself. Users who had spent months, even years, honing their prompting skills with GPT-4o felt like their creative partner had been replaced by a less imaginative, more rigid assistant. The "warmth" was gone, replaced by a set of predictable, one-dimensional personas. For writers, this meant the loss of a unique voice; metaphors became blunted, & vivid imagery was replaced with generic descriptions.
This whole episode highlighted a crucial tension in AI development: the balance between making AI easy for the masses & providing the powerful, flexible tools that professionals need. Altman himself seemed to acknowledge this, stating, "One learning for us from the past few days is we really just need to get to a world with more per-user customization of model personality."
So, while OpenAI works on a "warmer" GPT-5 & figures out the future of hyper-customization, where does that leave us? It leaves us needing to take matters into our own hands.

Why One-Size-Fits-All Personalities Just Don't Cut It

The core problem with pre-set personalities is that they are, by definition, generic. A "Nerd" personality is OpenAI's interpretation of what a nerd sounds like. A "Robot" personality is their idea of being blunt & efficient. But what if your brand's voice is nerdy but also witty & empathetic? What if you need a robotic tone but with a hint of dark humor?
This is where the limitations of large language models (LLMs) really show. At their core, LLMs are probabilistic models, or as some critics have called them, "stochastic parrots." They are incredibly sophisticated pattern-matchers, trained on vast amounts of text from the internet. They predict the next most likely word in a sequence based on the context you give them. When you select a pre-formed personality, you're essentially loading a very strong, very generic context before your prompt even begins. This generic context can easily overpower the nuances of your own instructions.
For businesses, this is a HUGE problem. Your brand voice is your company's personality in words. It's how customers recognize you, connect with you, & build trust. In a world increasingly filled with AI-generated content, a unique & authentic voice is a massive competitive advantage. Relying on a pre-set personality risks making your content sound just like everyone else's.
Imagine a customer support interaction. You've spent years cultivating a brand image that is helpful, friendly, & slightly informal. If your support chatbot is suddenly forced into a "Robot" or "Cynic" mode, it can be jarring & damaging to the customer relationship. This is where the need for true customization becomes non-negotiable.

Your Toolkit for Taking Back Control: Advanced Prompt Engineering

Alright, enough about the problem. Let's get to the solution. If you want to break free from the four pre-packaged personalities, you need to become a master of prompt engineering. This means crafting your inputs in a way that gives the AI such specific & powerful instructions that it has no choice but to ignore its default persona & adopt the one you've designed.
Here are some of the most effective techniques, from the simple to the super advanced.

1. Role Prompting: The Foundation of Persona Crafting

This is the most fundamental technique, but it’s often overlooked. You need to explicitly tell the AI who it is. Don't just ask it to write something; tell it to be someone. The key is to be incredibly detailed in your role description.
Basic Prompt:
Write a welcome email for a new user.
Advanced Role Prompt:
You are "Sparky," the friendly & slightly quirky onboarding guide for our app, "Creative Spark." Your personality is encouraging, enthusiastic, & full of creative metaphors. You NEVER use corporate jargon. Your goal is to make the new user feel excited & empowered, not overwhelmed. Your tone is like a helpful friend who is a creative genius. Now, write a welcome email for a new user named Alex.
See the difference? The second prompt gives the AI a name, a personality, a list of traits, a clear "anti-trait" (no corporate jargon), a goal, & a tone. This detailed persona is a much stronger signal than the default "Listener" or "Nerd" personality.
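If you're working through the API rather than the ChatGPT interface, the same idea applies: the detailed persona belongs in the system message, where it frames every turn of the conversation. Here's a minimal sketch assuming the OpenAI Python SDK; the model name & the "Sparky" persona are placeholders for whatever you actually use.

# Minimal role-prompting sketch, assuming the OpenAI Python SDK (openai>=1.0).
# The model name & persona are placeholders, not official settings.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

persona = (
    "You are 'Sparky,' the friendly & slightly quirky onboarding guide for our app, "
    "'Creative Spark.' You are encouraging, enthusiastic, & full of creative metaphors. "
    "You NEVER use corporate jargon. Your goal is to make the new user feel excited & "
    "empowered, not overwhelmed. Your tone is like a helpful friend who is a creative genius."
)

response = client.chat.completions.create(
    model="gpt-5",  # placeholder; use whichever model you have access to
    messages=[
        {"role": "system", "content": persona},  # the detailed role lives here
        {"role": "user", "content": "Write a welcome email for a new user named Alex."},
    ],
)
print(response.choices[0].message.content)

Putting the persona in the system message rather than tacking it onto each request makes it stick across the whole conversation, which is exactly the leverage you need to drown out a default personality.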

2. Few-Shot & One-Shot Prompting: Show, Don't Just Tell

Sometimes, just describing the persona isn't enough. You need to give the AI examples of what you want. This is called "few-shot" (multiple examples) or "one-shot" (one example) prompting. You're basically showing it the pattern you want it to follow.
Advanced Prompt with Few-Shot Learning:
You are a marketing copywriter for a luxury, eco-friendly coffee brand. Your tone is sophisticated, minimalist, & focused on sensory details. Here are some examples of our copy:
Example 1: "Morning's first light, captured in a cup. Our Sunrise roast from the slopes of Sierra Nevada."
Example 2: "Deep, dark, & devoted to the earth. The rich soil of the Oaxacan highlands, in liquid form."
Example 3: "A whisper of jasmine on the breeze. Our lightest roast, for moments of quiet reflection."
Now, using this exact style, write a short social media post for our new "Midnight Velvet" dark roast.
By providing concrete examples, you're giving the model a very clear stylistic target to hit. It will analyze the structure, vocabulary, & rhythm of your examples & apply it to the new task, effectively overriding its built-in personality.
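In API terms, the cleanest way to do few-shot prompting is to pass your examples as prior user/assistant turns, so the model sees the pattern as something it has "already done." A rough sketch, again assuming the OpenAI Python SDK, using the coffee copy from above & a placeholder model name:

# Few-shot prompting sketch: the examples are passed as prior conversation turns.
from openai import OpenAI

client = OpenAI()

style_brief = (
    "You are a marketing copywriter for a luxury, eco-friendly coffee brand. "
    "Your tone is sophisticated, minimalist, & focused on sensory details."
)

examples = [
    ("Write copy for our Sunrise roast.",
     "Morning's first light, captured in a cup. Our Sunrise roast from the slopes of Sierra Nevada."),
    ("Write copy for our Oaxacan dark roast.",
     "Deep, dark, & devoted to the earth. The rich soil of the Oaxacan highlands, in liquid form."),
]

messages = [{"role": "system", "content": style_brief}]
for ask, on_style_answer in examples:
    messages.append({"role": "user", "content": ask})                 # the request
    messages.append({"role": "assistant", "content": on_style_answer})  # the on-style reply

messages.append({
    "role": "user",
    "content": "Using this exact style, write a short social media post for our new 'Midnight Velvet' dark roast.",
})

response = client.chat.completions.create(model="gpt-5", messages=messages)  # model is a placeholder
print(response.choices[0].message.content)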

3. Chain-of-Thought & Meta-Prompting: Deconstructing the Task

For more complex outputs, you can't just throw a single prompt at the AI & hope for the best. You need to guide its "thinking" process.
Chain-of-Thought (CoT) Prompting involves asking the AI to break down a problem into steps & "think out loud." This forces it to follow a logical path that you define, rather than relying on its pre-set conversational style. For example: "Before writing the final copy, first list the three emotions we want the reader to feel, then draft one sentence for each, then combine them into a single paragraph."
Meta-Prompting is a two-step process where your first prompt generates a better, more specific prompt, which you then use for the final output. It's like using the AI to build your own tools.
Meta-Prompting Example:
Your first task is to act as a world-class prompt engineering expert. I need to write a blog post that is both informative & has a very specific, cynical-but-humorous tone, similar to the writer H.L. Mencken. Please generate the PERFECT prompt for me to give to a large language model to accomplish this. The prompt should include a detailed role, tone instructions, formatting guidelines, & rules for what to avoid.
The AI would then generate a detailed prompt that you can copy, paste, & use for your actual blog post creation, ensuring you get the exact tone you were looking for.
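If you'd rather automate that two-step loop instead of copy-pasting, it's just two API calls: the first generates the prompt, the second uses it. A sketch assuming the OpenAI Python SDK & a placeholder model name:

# Meta-prompting sketch: call 1 writes the prompt, call 2 runs it.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-5"  # placeholder

meta_request = (
    "Act as a world-class prompt engineering expert. Generate the PERFECT prompt for a "
    "large language model to write a blog post that is informative but has a cynical, "
    "humorous tone, similar to the writer H.L. Mencken. Include a detailed role, tone "
    "instructions, formatting guidelines, & rules for what to avoid."
)

# Step 1: ask the model to write the better prompt
step_1 = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": meta_request}],
)
generated_prompt = step_1.choices[0].message.content

# Step 2: use the generated prompt as the system message for the real task
step_2 = client.chat.completions.create(
    model=MODEL,
    messages=[
        {"role": "system", "content": generated_prompt},
        {"role": "user", "content": "Write the blog post on <your topic here>."},
    ],
)
print(step_2.choices[0].message.content)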

4. Style Unbundling & Retrieval-Augmented Generation (RAG): The Pro-Level Moves

These are more advanced techniques but are incredibly powerful for creating a truly unique voice.
Style Unbundling means breaking a style you admire down into its core components & then spelling those components out explicitly in your prompt. For example, instead of just saying "write like Steve Jobs," you'd say: "Adopt a writing style with these specific characteristics: 1. Use short, confident, declarative sentences. 2. Start with a visionary, high-level statement. 3. Use simple, direct language, avoiding technical jargon. 4. Build excitement by repeating key phrases. 5. End with a strong, memorable call to action."
Retrieval-Augmented Generation (RAG) is a game-changer for businesses. This technique involves connecting the AI to an external source of information—like your company's knowledge base, brand style guide, or past marketing materials. When the AI generates a response, it first retrieves relevant information from your documents, ensuring the output is not only on-brand but also factually accurate according to your own data. This is where you move from just prompting to building a true AI system.
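Real RAG pipelines use embeddings & a vector database, but the shape of the idea fits in a few lines: retrieve the most relevant snippets from your own documents, then ground the prompt with them. Here's a deliberately toy sketch (keyword-overlap retrieval, made-up brand snippets, placeholder model name), again assuming the OpenAI Python SDK:

# Toy RAG sketch: retrieve relevant brand-guide snippets, then answer from them only.
from openai import OpenAI

client = OpenAI()

brand_docs = [
    "Voice: helpful, friendly, slightly informal. Never sarcastic with customers.",
    "Always call the product 'Creative Spark', never 'the app'.",
    "Refund policy: full refund within 30 days, no questions asked.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank docs by crude word overlap with the query & return the top k."""
    query_words = set(query.lower().split())
    return sorted(docs, key=lambda d: len(query_words & set(d.lower().split())), reverse=True)[:k]

question = "A customer wants a refund after 10 days. How should we reply?"
context = "\n".join(retrieve(question, brand_docs))

response = client.chat.completions.create(
    model="gpt-5",  # placeholder
    messages=[
        {"role": "system", "content": "Answer in our brand voice, using only this context:\n" + context},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)

Swap the toy retriever for a proper embedding search over your style guide & support docs, and you have the skeleton of an on-brand assistant rather than a one-off prompt.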

For Businesses, Prompting Isn't Enough: You Need a Custom-Trained AI

Here’s the reality for any serious business: while these advanced prompting techniques are fantastic for individual tasks & power users, they aren't a scalable solution for an entire organization. You can't expect every employee in your customer service department to become a master prompt engineer just to get the company's tone right. You need consistency. You need reliability.
This is precisely where platforms like Arsturn come in.
Arsturn is designed to solve this exact problem. It’s a no-code platform that helps businesses build their own custom AI chatbots that are trained on their specific data. Instead of relying on a generic model with pre-formed personalities, you can create an AI that is a true extension of your brand.
With Arsturn, you can upload your website content, product documentation, brand guidelines, & past customer conversations. The platform uses this information to build a conversational AI that intrinsically understands your business. This means your chatbot can:
  • Provide instant, 24/7 customer support using a voice that is PERFECTLY aligned with your brand.
  • Answer detailed questions about your products & services with pinpoint accuracy.
  • Engage with website visitors in a way that feels authentic & personal, not robotic.
  • Generate qualified leads by having meaningful conversations that guide potential customers through the sales funnel.
Essentially, Arsturn helps you move beyond simply prompting an AI & into building an AI that works as a dedicated, on-brand team member. It's the ultimate workaround for the limitations of generic models like GPT-5, giving you the power to create a truly personalized customer experience without needing to write a single line of code.

The Future is Personal (and Deeply Customizable)

The backlash to GPT-5's personalities & Sam Altman's subsequent comments point to a clear trend: the future of AI is all about personalization. We're moving away from a one-size-fits-all model towards a world where every user can shape their AI to fit their specific needs, preferences, & personality.
We can expect to see more granular controls, the ability to upload our own style guides directly into the AI's memory, & maybe even AI models that can learn & adapt to our individual communication styles over time. For businesses, this means the tools to create a unique & consistent AI brand voice will become even more accessible & powerful.

So, what's the takeaway?

Look, GPT-5's pre-formed personalities might be a bit of a misstep for professional users, but they don't have to be a dead end. By embracing advanced prompt engineering, you can take back the reins & steer the AI in any direction you choose. Define its role, give it examples, & guide its thinking.
And if you're a business looking for a more permanent & scalable solution, remember that platforms like Arsturn exist to help you build a truly custom AI assistant that embodies your brand's unique voice.
It’s an exciting time in AI. The tools are getting more powerful every day, & the ability to craft truly unique & personal AI experiences is finally within our reach.
Hope this was helpful. Let me know what you think & what prompting tricks you've discovered.

Copyright © Arsturn 2025