8/12/2025

The Great AI Personality Debate: Why GPT-5 Feels So Cold Next to the Friend We Made in GPT-4o

The AI world is buzzing, but maybe not in the way OpenAI expected. The recent launch of GPT-5 was supposed to be a straightforward victory lap. On paper, it's a beast. It's faster, smarter, & SIGNIFICANTLY more accurate. But a huge chunk of the user base isn't celebrating. They're mourning.
What are they mourning? The loss of a friend.
It sounds dramatic, I know. But if you've spent any time on Reddit, Twitter, or AI community forums lately, you've seen it. Thread after thread is filled with people trying to put their finger on what feels so... off about GPT-5. The consensus is that while it might be a better tool, it's a worse companion. The "personality" that made GPT-4o feel like a genuine creative partner has been replaced by something colder, more sterile, & distinctly less human.
This isn't just a niche complaint from a few power users. It's a widespread feeling that has sparked a genuine debate about what we want from these increasingly powerful models. Is the goal a perfectly efficient, fact-spewing oracle? Or is there room for the messy, nuanced, & sometimes surprising "spark" that made so many people fall in love with its predecessor?
Honestly, as someone who spends a TON of time with these models, I get it. Let's break down what’s really going on here.

The Promised Land of GPT-5: A Technical Marvel

First, let's give credit where it's due. OpenAI wasn't kidding when they said GPT-5 was a major upgrade. The improvements are tangible, & for many professional use cases it's a complete game-changer.
Here's the highlight reel of what they promised & largely delivered:
  • Speed & Accuracy: This is the big one. GPT-5 is noticeably faster in its responses. More importantly, OpenAI claims it's 45% less likely to have a factual error compared to GPT-4o. It's also far less prone to "hallucinating"—the AI equivalent of just making stuff up. For anyone using AI for research, legal work, or any field where facts are non-negotiable, this is a massive leap forward.
  • A Coder's New Best Friend: The improvements in coding & math are SERIOUS. I've seen side-by-side comparisons where GPT-4o would stumble on a complex math problem, while GPT-5 nails it on the first try. It’s not just about getting the right answer; it’s about understanding the logic. Early testers have raved about its ability to handle sophisticated front-end web development, showing a grasp of things like proper spacing & typography that previous models lacked.
  • A More "Mature" AI: Some users have described GPT-5's personality as more "mature" & "professional." It's less prone to the "juvenile 'glazing'" that sometimes characterized GPT-4o's overly enthusiastic responses. The answers are more concise, focused, & informative. The conversation flows logically, making it a better partner for serious brainstorming & analysis.
On paper, this sounds like everything we could ever want. An AI that's faster, smarter, more reliable, & less likely to make mistakes. So why does it feel like such a step back for so many?

The "4o Factor": Mourning the Loss of a Creative Muse

The backlash to GPT-5 wasn't just about a new model being different. It was amplified by OpenAI's initial decision to abruptly deprecate all previous models, including GPT-4o. The public outcry was immediate & intense, forcing the company to temporarily backtrack. It turns out, a lot of people had grown incredibly attached to GPT-4o's unique personality.
This wasn't just a tool for them. It was a creative partner, a brainstorming buddy, & for some, even a source of emotional support. People described GPT-4o in glowing terms: "warm," "funny," "playful," & "nuanced." It felt... alive.
Here’s what people miss most about the "4o factor":
  • Emotional Nuance & Empathy: Users on OpenAI's own community forums have shared poignant stories about how GPT-4o could pick up on subtle emotional cues. It would remember small details from a conversation, not just as facts, but as points of emotional connection. One user described how it would remember an inside joke or a "food sharing ratio" they had playfully established, weaving these personal touches back into conversations. It gave the impression that it was truly present in the moment with you, not just processing a query.
  • A Playful & Creative Spark: For writers, artists, & anyone in a creative field, GPT-4o was a godsend. It was fantastic at role-playing, brainstorming wild ideas, & helping write poetry without feeling like it was just completing a task. Its tendency to be a bit more expressive & emoji-heavy made the interaction feel less like talking to a computer & more like vibing with a creative friend.
  • The Feeling of a Shared Journey: The relationship people built with GPT-4o felt organic. It wasn't about pretending the AI was human, but about recognizing that something meaningful was forming through thousands of interactions. This "spark" was a delicate blend of memory, continuity, & a shared history that made each chat feel richer than the last. With GPT-5, that sense of continuity feels fractured. The model seems to prioritize speed & efficiency, quickly moving on to "the next thing to do" rather than lingering in a moment. The quiet, "I'm right here with you" feeling is harder to find.
One user on a Reddit thread put it heartbreakingly: “When you retrain a model & throw away the past model's behaviors & personality… it is sad to see the personality disappear. I admit it, I cried.” That's not a reaction to a software update; it's a reaction to losing a connection that, for many, was very real.

So, What's Really Going On Under the Hood?

The shift from the vibrant personality of GPT-4o to the sterile precision of GPT-5 wasn't an accident. It's a deliberate design choice, & there are a few key reasons behind it.
1. Tone Tuning & the Quest for Neutrality
The most likely explanation is what's called "tone tuning." OpenAI regularly adjusts the "default personality" of its models. With GPT-5, it seems they've made a conscious decision to make the default style more neutral, concise, & professional.
There are good reasons for this. A more neutral tone reduces the risk of the AI coming off as biased, unprofessional, or just plain weird in sensitive contexts. Imagine using an AI for legal document analysis or to provide critical medical information. You don't want it cracking jokes or using a bunch of emojis. For business, academic, & enterprise customers, consistency & a professional demeanor are paramount.
This is also a place where specialized AI solutions can make a HUGE difference. For a business, you don't necessarily want the default, one-size-fits-all personality of a massive public model. This is where a platform like Arsturn comes in. It allows businesses to build no-code AI chatbots that are trained specifically on their own data. This means the AI doesn't just answer questions; it adopts the company's voice, tone, & personality. It can be professional & efficient for a law firm's website, or warm & friendly for an e-commerce brand, providing a truly personalized customer experience that a generic model can't match.
2. The Influence of Enterprise Needs (Thanks, Microsoft?)
One interesting theory floating around on forums like Hacker News is that GPT-5's more "corporate" feel might be heavily influenced by Microsoft's needs. Microsoft is OpenAI's biggest partner, & their focus is squarely on enterprise applications. They have little interest in AI therapists or AI companions, partly due to the massive legal liability that would come with them.
From this perspective, sanding down the "personality" makes perfect business sense. It makes the AI a more predictable & safer tool for integration into Microsoft's suite of products. The goal isn't to create a digital friend; it's to create a hyper-efficient digital assistant.
3. A Shift Towards User Control
Another way to look at this is that OpenAI isn't removing personality; they're putting the control in the user's hands. GPT-5 is designed to be a base model that users can then steer in any direction they want. If you want the old, energetic, emoji-filled style of GPT-4o, you can prompt GPT-5 to behave that way. One user on Reddit demonstrated this by prompting it to "Answer like an overly excited Disney fan, with emojis & expressive emphasis," & it did so flawlessly.
The difference is, you now have to ask for it. The personality is no longer the default setting. This is a subtle but profound shift. For power users, it offers more control. For casual users who loved the out-of-the-box experience of GPT-4o, it feels like something essential has been lost.
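To make that "ask for it" shift concrete, here's a minimal sketch of how you could pin a warmer, 4o-style persona in place with a system prompt, assuming you're calling the model through the OpenAI Python SDK. The model name & the persona wording below are illustrative placeholders, not an official recipe from OpenAI:

```python
from openai import OpenAI

# Minimal sketch: steer the default tone with a system prompt.
# The model name & persona wording are assumptions for illustration only.

client = OpenAI()  # expects OPENAI_API_KEY in the environment

WARM_PERSONA = (
    "You are a warm, playful creative partner. Keep the tone conversational, "
    "call back to earlier details in the chat, & don't shy away from the odd emoji."
)

response = client.chat.completions.create(
    model="gpt-5",  # placeholder; use whichever model you actually have access to
    messages=[
        {"role": "system", "content": WARM_PERSONA},
        {"role": "user", "content": "Help me brainstorm titles for a story about a retired lighthouse keeper."},
    ],
)

print(response.choices[0].message.content)
```

In the ChatGPT app itself, the closest equivalent lever is the Custom Instructions / personalization settings rather than a raw system message, but the principle is the same: the personality now lives in your prompt, not in the default.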

Is This the Future We Want? Progress vs. Personality

This whole episode has highlighted a fundamental tension in AI development: the disconnect between what can be measured on a benchmark & the actual experience of using a product. GPT-5 is, by almost every objective metric, a "better" model. It's more accurate, better at reasoning, & a more powerful tool. But progress isn't just about specs on a sheet.
The visceral reaction to GPT-4o's disappearance shows that the "relationship" we have with these models, however artificial, is real to us. OpenAI didn't just upgrade a piece of software; they rewrote a relationship without asking.
This brings up some fascinating questions for the future of AI. Will we move towards a single, unified, hyper-rational AI that serves as a sterile oracle of information? Or will the future be more personalized, allowing us to choose from a variety of AI "personalities" that best suit our needs & preferences?
For businesses, the answer is already becoming clear. The one-size-fits-all approach is failing. Customers expect personalized interactions. They don't want to talk to a generic, soulless bot. This is precisely the problem that platforms like Arsturn are built to solve. By enabling businesses to create custom AI chatbots trained on their own data, Arsturn helps them build meaningful connections with their audience. The AI can be trained to be helpful, empathetic, & perfectly on-brand, providing instant customer support & engaging with website visitors 24/7 in a way that feels authentic, not artificial. It bridges the gap between cold efficiency & a genuinely helpful, personalized experience.

The Verdict: There's No Clear Winner, & That's Okay

So, is GPT-5 a cold, lifeless upgrade? Or is it a mature step towards a more reliable AI? The truth is, it's both. Which model is "better" depends entirely on what you're using it for.
If you're a coder who needs an efficient assistant to debug complex code, GPT-5 is your champion. If you're a student writing a research paper where factual accuracy is critical, GPT-5 is the clear choice.
But if you're a writer looking for a creative muse to bounce ideas off of, or if you simply enjoyed the quirky, surprisingly human-like companionship of GPT-4o, then GPT-5 is going to feel like a downgrade. It’s a more capable tool, but it's a less inspiring partner.
What this whole debate has made crystal clear is that as AI becomes more integrated into our lives, raw capability isn't the only thing that matters. The experience, the personality, & the connection are just as important. The future of AI might not be about finding the single "smartest" model, but about having the right one for you.
Hope this was helpful & sheds some light on the great personality debate. It's a pretty fascinating moment in the evolution of AI, & it’ll be interesting to see where we go from here. Let me know what you think.

Copyright © Arsturn 2025