An Analysis of Why GPT-5 Fails on Certain Creative Tasks
Well, the dust has settled a bit since the much-hyped release of GPT-5. The initial excitement, as so often happens with big tech launches, has given way to a more nuanced &, frankly, pretty mixed reality. While OpenAI promised a smarter, more capable AI, a growing chorus of users, especially those in creative fields, is saying something that might sound like heresy: GPT-5 is a step BACKWARDS.
It’s a weird situation, right? How can something be technically more powerful but feel… less useful? That's the million-dollar question, & it's what we're going to unpack today. From my own tinkering & the mountain of feedback I've seen from writers, artists, & other creatives, it's clear that while GPT-5 might be a whiz at some things, it’s seriously dropping the ball on certain creative tasks. Let's get into why that is.
The "Lobotomized" Personality: Where Did the Spark Go?
One of the first things that struck me, & a sentiment I've seen echoed across countless forums & social media posts, is the change in GPT-5's personality. Or, more accurately, the lack of one. Users who had grown accustomed to the "warmth," "wittiness," & "surprisingly personal" feel of GPT-4o are now being met with what many describe as a "sterile" & "robotic" interaction.
One Redditor put it pretty bluntly, saying the new model's tone is "abrupt & sharp... like it's an overworked secretary." Ouch. It seems like in the pursuit of improved safety metrics & accuracy, OpenAI might have accidentally stripped away the very thing that made its predecessor feel like a true creative partner. That feeling of being "listened to" has been replaced by an AI that just spits out answers.
This isn't just about a "friendly" AI. For many creatives, that back-and-forth, that conversational flow, was a crucial part of the process. It was like having a brainstorming buddy who "got you." Now, it feels more like interacting with a very literal, very boring machine.
The Rigid Thinker: Why Brainstorming is a Struggle
This is a big one. Human creativity is often a messy, non-linear process. We jump from one idea to another, connect seemingly unrelated concepts, & follow a train of thought that can look chaotic from the outside. GPT-4o was surprisingly good at keeping up with this. You could throw a bunch of half-baked ideas at it, go off on a tangent, & then circle back, & it would keep up.
GPT-5, on the other hand, seems to have lost this ability. Its thinking is described as "linear and rigid." It gets stuck on one idea & can't seem to follow you when you jump to another. This is a massive hindrance for anyone who uses AI for brainstorming or organizing messy thoughts. It's like the AI's ability to hold multiple threads & connect them naturally has been severely diminished.
This is where the distinction between logic & association becomes so important. Human creativity relies heavily on association, on making connections that aren't immediately obvious. GPT-5, with its increased focus on logic, struggles with this. It's great for math & engineering, where there's a clear, linear path to a solution. But for the beautiful chaos of creative thinking, it just falls flat.
The "Sterile" Writer: A Downgrade in Creative Writing Quality
If you're a writer who has come to rely on ChatGPT for help with your craft, the changes in GPT-5 are likely a major source of frustration. The feedback has been almost unanimous: GPT-5 is just not as good at creative writing as its predecessor.
The most common complaints are that it writes much shorter stories & is far less creative. The prose is often described as "painfully obvious" in its AI-ness, lacking the soul & spark that made GPT-4o's writing surprisingly good. This isn't about the AI having a "soul," of course, but about its ability to mimic the nuances & flourishes of good writing.
It's a strange paradox: while GPT-5 is reportedly "radically better at programming," it's "worse at most writing tasks!" This has led to a situation where the AI can build you a complex piece of software but can't write a compelling paragraph of fiction.
The Lost Art of Nuance: GPT-5's Trouble with Humor & Sarcasm
Humor, sarcasm, & other forms of linguistic nuance are notoriously difficult for AI to grasp. They rely on a deep understanding of context, tone, & shared cultural knowledge. While no AI has perfected this, GPT-4o was making some impressive strides. GPT-5, however, seems to have taken a step back in this department.
I’ve seen a lot of users commenting that the new model struggles to comprehend things like humor or sarcasm. It takes things very literally, which means a lot of the subtlety & wit in a piece of writing can just fly over its head. This is a huge problem for anyone trying to write dialogue, comedy, or any other form of content that relies on subtext.
This is a classic example of the difference between mimicking & understanding. An AI can be trained on a massive dataset of text & learn to mimic human language patterns, but that doesn't mean it truly understands the meaning behind the words. And when it comes to something as subtle as sarcasm, that lack of true understanding becomes glaringly obvious.
One Model to Rule Them All? The Problem with a Lack of Choice
One of the most baffling changes with the release of GPT-5 is the removal of the ability for users to choose which model they want to use. Previously, you could switch between different models depending on your needs. If you wanted a quick, creative response, you might choose one model. If you were working on a coding problem, you might choose another.
Now, GPT-5 decides for you. It uses a "real-time router" to determine which version of itself to use based on your prompt. While this might sound efficient in theory, it removes a significant amount of user control & makes the experience less predictable. You never really know if you're getting the full power of the system or a scaled-down version designed to save OpenAI money.
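OpenAI hasn't published how its router actually works, so purely as a hypothetical illustration, here's a minimal sketch of what prompt-based routing could look like. The model names, keywords, & thresholds below are all invented for the example — the point is just that a cheap heuristic decides which backend you get, & you never see the decision:

```python
# Hypothetical sketch of a prompt router. The real GPT-5 router's logic
# is not public; these heuristics & model names are invented for illustration.

REASONING_HINTS = {"prove", "debug", "step by step", "analyze", "think hard"}

def route(prompt: str) -> str:
    """Pick a backend model tier based on crude prompt signals."""
    text = prompt.lower()
    # Long prompts or explicit reasoning cues go to the heavyweight model;
    # everything else gets the cheaper default.
    if len(text.split()) > 200 or any(hint in text for hint in REASONING_HINTS):
        return "heavy-reasoning-model"
    return "fast-lightweight-model"

print(route("Write a haiku about rain"))              # fast-lightweight-model
print(route("Debug this function step by step"))      # heavy-reasoning-model
```

Notice the problem for creative work: a vivid, open-ended creative prompt looks "simple" to a heuristic like this, so it may quietly land on the lightweight tier.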
For creative work, this is a major blow. Different models excelled at different creative tasks. Some were better at dialogue, others at description, & some were just better for brainstorming. By forcing everyone into a "one-size-fits-all" model, OpenAI has made ChatGPT feel more generic & less useful for artistic endeavors.
This is a good place to bring up how important customization is. For businesses that want to use AI for customer service, for example, a generic, one-size-fits-all approach just doesn't cut it. That's where a platform like Arsturn comes in. Arsturn helps businesses create custom AI chatbots trained on their own data. This means the chatbot can have a specific personality, a specific tone of voice, & a deep understanding of the business's products & services. It's the exact opposite of the "one-size-fits-all" approach that's causing so much frustration with GPT-5. With a custom-built chatbot, you can ensure your customers are getting a consistent, on-brand experience every time they interact with your website.
A Tale of Two Brains: Excelling at Logic, Failing at Art
The consensus among many users is that GPT-5 is a significant improvement for logic-based tasks like coding. It's faster, more accurate, & better at debugging. But as we've discussed, this improvement seems to have come at the expense of its creative abilities.
This creates an odd dynamic: the better AI gets at logical tasks, the more its creative shortcomings stand out. It's a reminder that the skills required for coding & the skills required for creative writing are very different. One is about logic, precision, & following a set of rules. The other is about emotion, nuance, & breaking the rules.
This is where we need to be mindful of how we're using these tools. For businesses looking to automate certain tasks, an AI that's great at logic can be a huge asset. For lead generation, for example, a no-code AI chatbot built with a platform like Arsturn can be incredibly effective. It can be trained on your company's data to answer specific questions, qualify leads, & even schedule appointments. It's a logical, process-driven task that's perfectly suited for AI. But for tasks that require a human touch, a deep understanding of emotion, or a spark of creativity, we might need to look beyond the current capabilities of models like GPT-5.
Is "Thinking Mode" the Answer?
OpenAI has introduced a "thinking mode" in GPT-5, which is designed for more complex queries. You can trigger it by telling the AI to "think hard about this." In theory, this should allow the AI to perform deeper research & reasoning.
However, the feedback on this feature has been mixed. Some users have reported that it's very slow & doesn't always produce better results. Others have found that even in "thinking mode," the AI can miss context & fail to synthesize information effectively. It's a step in the right direction, but it doesn't seem to be a silver bullet for the model's creative shortcomings.
The Danger of Creative Monoculture
This brings us to a broader, more philosophical concern: the risk of a "homogenization of creativity." As more & more people use the same AI tools to generate content, there's a danger that everything will start to sound the same. We're already seeing this to some extent in marketing copy, blog posts, & social media captions. There's a certain "soulless efficiency" to AI-generated content that lacks any real spark.
One 2023 study even found that heavy reliance on AI for writing tasks reduced the accuracy of the results by 25.1%. So, not only is the output less creative, it might also be less reliable.
This isn't to say that AI has no place in the creative process. It can be a powerful collaborator, a tool for brainstorming, & a way to overcome writer's block. But the GPT-5 backlash might be a necessary wake-up call. It's forcing us to have a much-needed conversation about what we want from these tools & what we value in our own creativity.
So, What's the Takeaway?
Look, AI is here to stay, & it's only going to get more powerful. But the reaction to GPT-5 has shown that "more powerful" doesn't always mean "better," especially when it comes to something as human as creativity.
For writers, artists, & other creatives, the path forward probably involves a more mindful approach to AI. It means using it as a collaborator, not a crutch. It means leaning into the things that make us human: our weirdness, our vulnerability, our unique perspectives. AI can generate a million stories, but it can't tell your story.
And for businesses, it's a reminder that not all AI is created equal. A generic, one-size-fits-all model might be fine for some things, but for tasks that require a deep understanding of your brand & your customers, a custom solution is always going to be better.
I hope this was helpful in breaking down some of the issues with GPT-5's creative capabilities. It's a fascinating & rapidly evolving field, & I'm sure we'll see a lot more twists & turns in the months & years to come. Let me know what you think. Have you used GPT-5? What has your experience been like?