8/11/2025

GPT-5 Is Here, & Some Power Users Are Already Over It. Here's Why.

Well, it finally happened. After what felt like an eternity of hype, speculation, & cryptic tweets from Sam Altman, GPT-5 is officially out in the wild. The launch was billed as a massive leap forward, a "PhD-level expert" in your pocket. & honestly, in some ways, it is pretty impressive. The benchmarks show it's technically better on some specific tasks.
But here’s the thing… a growing number of power users, the folks who spend hours a day wrestling with these models for coding, writing, & complex problem-solving, are… well, they’re not thrilled. In fact, a lot of them are downright frustrated.
The general sentiment floating around online forums & social media isn't one of revolutionary excitement, but of a weird, collective sigh of disappointment. It feels less like a true upgrade & more like a step sideways, or for some, even a step back. People are talking about "AI shrinkflation," where you pay the same but get less.
So, what's the deal? Why are some of the most dedicated AI users already looking for greener pastures? I've been digging through the chatter, the reviews, & the hot takes, & it boils down to a few key frustrations.

Frustration #1: The "One-Size-Fits-All" Approach & The Missing Model Picker

This is probably the biggest one. With the GPT-5 rollout, OpenAI yanked the model picker from ChatGPT. Before, you could choose between different models: GPT-4o for everyday conversation, o3 when you needed deeper reasoning, or the faster, more efficient o4-mini for simpler tasks. Power users LOVED this. It gave them control. They could pick the right tool for the job.
Now, it’s just GPT-5. OpenAI says a "router" automatically decides which version of the model to use for your query, whether it’s the full-power version or a lighter, faster one. The problem? A lot of people feel like this router is, to put it bluntly, not very good. They're finding that even for complex tasks, they're getting shorter, less detailed, & frankly, dumber responses.
It feels like a cost-cutting measure disguised as a feature. By forcing everyone through a router, OpenAI can push users towards cheaper-to-run models, saving themselves a ton of money on GPU costs. But for the user, it just feels like a loss of control & a downgrade in quality.

What Power Users Are Doing Instead:

This is where things get interesting. Instead of just accepting the new reality, power users are building their own "model-picker" workflows out of a variety of specialized tools (there's a rough sketch of what that can look like right after this list).
  • Claude for Creative & "Human-Like" Tasks: When it comes to creative writing, brainstorming, or just getting a response that doesn't sound like a corporate robot, a lot of people are turning to Anthropic's Claude. Reddit threads are full of users who feel Claude's writing style is more natural & less repetitive than GPT-5's often "PR-talk" language. For tasks where nuance, tone, & a touch of personality are important, Claude is becoming the go-to. It's often described as more "personable" & empathetic.
  • Poe as a Multi-Model Hub: Instead of being locked into one ecosystem, savvy users are flocking to platforms like Poe. Poe is pretty cool because it’s not a single model; it’s an aggregator. It lets you access & compare a whole bunch of different AI models from OpenAI, Anthropic, Google, & more, all in one place. This gives users back the control they lost with ChatGPT, allowing them to experiment & find the perfect model for any given task.
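If you're curious what a DIY "model picker" can look like in practice, here's a minimal sketch in Python using the official openai & anthropic SDKs. The task categories, routing table, & model names are placeholder assumptions on my part; swap in whichever models you actually trust for each kind of work.
```python
# A bare-bones "bring your own model picker" sketch. The task categories &
# model names below are placeholders -- adjust them to your own preferences.
from openai import OpenAI        # pip install openai
from anthropic import Anthropic  # pip install anthropic

openai_client = OpenAI()         # reads OPENAI_API_KEY from the environment
anthropic_client = Anthropic()   # reads ANTHROPIC_API_KEY from the environment

# You decide which model handles which kind of work, not a hidden router.
ROUTES = {
    "creative": ("anthropic", "claude-sonnet-4-20250514"),  # example model name
    "coding":   ("anthropic", "claude-sonnet-4-20250514"),
    "general":  ("openai", "gpt-5"),
}

def ask(task_type: str, prompt: str) -> str:
    """Send the prompt to whichever model you've mapped to this task type."""
    provider, model = ROUTES.get(task_type, ROUTES["general"])
    if provider == "anthropic":
        resp = anthropic_client.messages.create(
            model=model,
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.content[0].text
    # Default: route to OpenAI's chat completions API.
    resp = openai_client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(ask("creative", "Give me three taglines for a home espresso newsletter."))
```
Tools like Poe give you the same kind of control without writing any code; the point is simply that the choice of model stays in your hands instead of behind a black-box router.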

Frustration #2: The "Soulless" Responses & Lack of Personality

This one's a bit more subjective, but it's a recurring theme. A lot of users who had spent months, even years, building a rapport with GPT-4o felt like they were "losing a trusted friend." They describe GPT-5's responses as "colder," "more mechanical," & "soulless."
Even if the new model is technically more accurate on some benchmarks, the conversational experience has taken a nosedive for many. The witty banter, the creative flair, the little quirks that made GPT-4o feel like a genuine creative partner: much of that seems to be gone, replaced by shorter, more "clipped" answers. This has led to a massive backlash, so much so that OpenAI actually backpedaled & announced they would bring back GPT-4o as a selectable option for Plus subscribers. It just goes to show that for many users, performance isn't just about benchmark numbers; it's about the experience.

What Power Users Are Doing Instead:

When the goal is a more engaging & less robotic interaction, users are getting creative with their tool choices.
  • DeepSeek for Natural Language & Niche Topics: A surprising contender that keeps popping up is DeepSeek. Users report that DeepSeek's models, particularly for tasks like writing, produce incredibly natural-sounding text. One user on Reddit specifically mentioned that for niche topics like classical music or writing in Russian, DeepSeek "absolutely blows everything else out of the water." It seems to have a knack for avoiding the generic "AI style" that plagues many other models.
  • Microsoft Copilot for a More Conversational Vibe: While it's also powered by OpenAI's models, Microsoft Copilot has been fine-tuned differently. Some users find its voice mode, in particular, to be more conversational & less robotic than ChatGPT's. It’s a subtle difference, but for those who spend a lot of time "talking" to their AI, it makes a big impact.

Frustration #3: The Heavy Hand of Moderation & Censorship

This has been a long-simmering issue, but it seems to have become more pronounced with GPT-5. Users are finding that the model is often overly cautious, refusing to answer questions or engage with topics it deems sensitive. While safety is obviously important, many feel the pendulum has swung too far, leading to a frustratingly "neutered" experience.
This is especially true for creative writers who want to explore more complex or mature themes in their work. The feeling is that in the quest to be universally inoffensive, the AI has lost its edge & its ability to engage with the full spectrum of human experience.

What Power Users Are Doing Instead:

For those who need a bit more freedom & a little less hand-holding, a couple of alternatives are gaining traction.
  • Grok for an Unfiltered (and Sometimes Controversial) Take: Elon Musk's Grok is often positioned as the "free speech" alternative to ChatGPT. While it's not without its own quirks & biases, it's generally seen as less restrictive. It also has real-time access to data from X (formerly Twitter), which gives it a more current, & often more raw, perspective on events as they unfold. For users who want an AI that's a little less afraid to be controversial, Grok is an increasingly popular choice.
  • Local & Open-Source Models for ULTIMATE Control: For the true power users, the ultimate solution is to move away from commercial platforms altogether. By running open-source models like Meta's Llama 3.2 on their own hardware, they get complete control over the model's behavior. This is a more technical path, for sure, but it's the only way to guarantee a truly uncensored & customizable AI experience (there's a quick sketch of what that looks like just below).
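To make that concrete, here's a minimal sketch of chatting with a local model. It assumes you've installed Ollama (one popular way to run Llama models locally) & already pulled a model; the model tag & prompt are just examples.
```python
# A minimal local-model sketch. Assumes Ollama is installed & running
# (https://ollama.com) and that you've already pulled a model, e.g.:
#   ollama pull llama3.2
# The model tag & prompt below are just example assumptions.
import ollama  # pip install ollama

response = ollama.chat(
    model="llama3.2",  # any model you've pulled locally works here
    messages=[{"role": "user", "content": "Summarize the plot of Macbeth in three sentences."}],
)

# Everything above runs on your own hardware -- nothing is sent to a third party.
print(response["message"]["content"])
```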

Frustration #4: The Disappointing Performance on Core Tasks

Here's the real kicker. Despite the hype about GPT-5's "PhD-level" intelligence, many power users are finding that it's actually worse at some core tasks than its predecessors. There are numerous reports of GPT-5 struggling with things that GPT-4o handled with ease, like basic summarization, organizing text into tables, & even some coding tasks.
One user on Hacker News reported that for their summarization/insights API, GPT-5 was "way worse than gpt-4.1-mini," failing to follow instructions & producing objectively bad results. Another user trying to generate SQL queries found that while GPT-5 had a marginally higher median accuracy, it was also more expensive, slower, & had a lower overall success rate than older, smaller models.
This all points to a suspicion that OpenAI may have prioritized benchmark scores over real-world usability. The gains are there, but they seem to be narrow & not always relevant to the day-to-day tasks that power users rely on.

What Power Users Are Doing Instead:

When it comes to getting the job done, especially for technical tasks, power users are building a new, more specialized toolkit.
  • Claude & DeepSeek for Coding: While GPT-5's coding benchmarks look impressive, some developers are finding it slow & clunky in practice. They report that it often gets stuck "thinking" for long periods & can introduce new bugs. In contrast, many are finding that Claude is more reliable for complex coding tasks, with one user noting it was able to fix bugs that GPT-5 had introduced. DeepSeek is also frequently mentioned as a strong, free alternative for coding, with some developers switching to it entirely.
  • Gemini for Research & Data Analysis: Google's Gemini is carving out a niche for itself as a powerful research tool. Its main advantage is its massive context window & its tight integration with the Google ecosystem. The ability to point Gemini to your Google Drive & have it analyze PDFs & other documents is a HUGE deal for researchers & academics. For anyone who needs to synthesize information from large documents, Gemini is a very compelling alternative to GPT-5.
  • Perplexity for Sourced & Up-to-Date Information: If you're looking for an AI-powered search engine that provides sources for its claims, Perplexity is a fantastic tool. It's designed from the ground up to be a research assistant, allowing you to focus its searches on academic papers, YouTube, Reddit, or the entire web. This makes it a much more reliable tool for fact-checking & deep dives than ChatGPT's more general-purpose web search.

So, What's the Big Picture?

Look, GPT-5 isn't a total failure. It's an incredibly powerful piece of technology. But the initial rollout seems to have misjudged what its most dedicated users actually want. Power users crave control, flexibility, & reliable performance on the tasks that matter to them. The "one-size-fits-all," black-box approach of the new ChatGPT just isn't cutting it.
This has created a fascinating shift in the AI landscape. Instead of a single, dominant model, we're seeing the rise of a more diverse ecosystem of specialized tools. Power users are becoming AI polyglots, building custom workflows that leverage the unique strengths of different models. It's less about finding the "best" AI & more about building the best team of AIs.
And for businesses, this trend has some interesting implications. If you're relying on a single, general-purpose AI for things like customer service or lead generation, you might be missing out. The future of AI in business is likely to be more specialized. For example, instead of a generic chatbot, you might use a tool like Arsturn, which allows you to build no-code AI chatbots trained on your own data. This creates a much more personalized & effective customer experience, providing instant support & answering questions 24/7 with information that's specific to your business. It's a move away from the "one-size-fits-all" model & towards something much more tailored & powerful.
It's still early days for GPT-5, & OpenAI will undoubtedly continue to tweak & improve it. But for now, the power users have spoken. They're not waiting around for things to get better; they're actively building the future of AI for themselves, one specialized tool at a time.
Hope this was helpful. Let me know what you think.

Copyright © Arsturn 2025