8/13/2025

So, GPT-5 is here, & the dust is still settling. It was hyped as this massive leap, a "PhD-level expert" ready to solve all our problems. & for some things, it's admittedly pretty impressive on paper. But here's the funny thing that's been bubbling up in forums, on Reddit, & across social media – a lot of the people who use these models the most, the real power users, are... well, not exactly over the moon.
It's less a feeling of revolutionary change & more a collective, slightly disappointed sigh. People are even throwing around terms like "AI shrinkflation," where you feel like you're getting less even though you're paying the same. So, if you're in that camp, feeling a bit let down by GPT-5's personality or performance, you're probably asking a pretty important question: can you just... not use it? Can you go back to the good old days of GPT-4o or other models?
The short answer is: YES, you absolutely can. But it's a bit more complicated than just flipping a switch. Let's break it all down.

The Great Model Picker Disappearance (& Reappearance)

Probably the biggest source of frustration with the initial GPT-5 rollout was the disappearance of the model picker in ChatGPT. Power users LOVED having that little dropdown menu. It meant you had control. You could pick the super-smart, top-tier model for a complex coding problem, then switch to a faster, lighter version for a quick email draft. It was about using the right tool for the right job.
When GPT-5 launched, that control vanished. OpenAI's idea was to have an automatic "router" that would decide for you which version of the model was best for your query. The problem? A lot of users felt this router was, to be gentle, not great. They reported getting shorter, less detailed, & sometimes just plain dumber answers, even for complex prompts. It felt like a downgrade disguised as an upgrade, possibly to save on costs.
The backlash was swift & loud. People were genuinely upset, talking about how it disrupted their workflows. & to OpenAI's credit, they listened. They brought back the model picker, giving users access to legacy models once again.
So, how do you get it back?
  • You Need a Paid Plan: Here's the catch: access to these legacy models isn't available to free users. You'll need to be on a ChatGPT Plus ($20/month) or Pro ($200/month) plan.
  • Enable in Settings: If you're a subscriber, go into your ChatGPT settings in your web browser & you should see an option to "Show legacy models" or "Show additional models." Toggle that on, & the next time you start a new chat you'll see the familiar model picker, letting you select from options like GPT-4o, o3, & others depending on your subscription tier.
It's a bit of a hoop to jump through, but for those who rely on the specific performance or "personality" of older models, it's a HUGE relief. CEO Sam Altman has even promised that if they ever decide to deprecate GPT-4o for good, they'll give users plenty of notice this time.
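Quick side note for the developers in the room: if you're hitting these models through the API rather than the ChatGPT app, there's no router to wrestle with at all. You just pass the model name you want on every request (& API usage is billed separately from a Plus or Pro subscription). Here's a minimal sketch using the official openai Python package; it assumes you have an API key set in your environment, & "gpt-4o" is just a placeholder for whatever model your account actually has access to.

```python
# Minimal sketch: pinning a specific model via the OpenAI API instead of
# relying on ChatGPT's automatic router. Assumes the `openai` Python package
# is installed & OPENAI_API_KEY is set in your environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder: use whichever model your account offers
    messages=[
        {"role": "system", "content": "You are a concise technical assistant."},
        {"role": "user", "content": "Draft a short follow-up email about tomorrow's demo."},
    ],
)

print(response.choices[0].message.content)
```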

Why Are People Sticking with Older Models Anyway?

This whole situation raises an obvious question: if GPT-5 is supposed to be the latest & greatest, why are so many people looking backward? It's not just about nostalgia for old chatbots. There are some very practical reasons.
  • Personality & "Vibe": This one is a bit subjective, but it's a common complaint. Some users find GPT-5 to be "sterile" or "corporate" in its responses. They miss the "warmer" or more natural conversational style of GPT-4o. When you're spending hours a day interacting with an AI, that "vibe" really matters.
  • Workflow Reliability: Many developers & writers had built specific workflows & prompts that were finely tuned to GPT-4. When GPT-5 came along, those workflows broke. The new model would interpret prompts differently or fail at tasks the older model handled with ease. One user on Reddit mentioned that Claude was able to fix bugs that GPT-5 had actually introduced into their code.
  • The "Thinking" Problem: Ironically, for a model that's supposed to be smarter, some find GPT-5 gets stuck "thinking" for too long, only to produce a mediocre result. For fast-paced tasks, a slightly less "smart" but much faster model is often preferable.
  • Specialized Strengths: Different models are just better at different things. Power users are realizing that a "one-size-fits-all" approach isn't the future. The real power comes from having a toolkit of specialized models.
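To make that "toolkit" idea a bit more concrete, here's a rough sketch of what a lot of power users are effectively doing by hand: a tiny task-to-model routing table that sends one kind of work to one provider & another kind to another. It uses the official openai & anthropic Python packages, & the task labels, model names, & routing choices are all illustrative assumptions, not a recommendation of any particular mapping.

```python
# Rough sketch of a personal "model toolkit": route each task type to the
# model you trust for it, instead of sending everything to one default.
# Assumes the `openai` & `anthropic` packages are installed & API keys are
# set in the environment. Model names below are placeholders.
from openai import OpenAI
from anthropic import Anthropic

openai_client = OpenAI()
anthropic_client = Anthropic()

# Purely illustrative mapping: pick whatever models work for YOUR tasks.
ROUTES = {
    "quick_draft": ("openai", "gpt-4o-mini"),
    "code_review": ("anthropic", "claude-3-5-sonnet-latest"),
    "deep_analysis": ("openai", "gpt-4o"),
}

def ask(task: str, prompt: str) -> str:
    provider, model = ROUTES[task]
    if provider == "openai":
        resp = openai_client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content
    resp = anthropic_client.messages.create(
        model=model,
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.content[0].text

print(ask("quick_draft", "Write a two-line status update about the launch."))
```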

Your "Post-GPT-5" Toolkit: The Growing World of Alternatives

The good news is, if you're feeling let down by GPT-5 or just want more options, the market for AI models is EXPLODING. We're moving away from a world where one model dominates everything. Here are some of the top alternatives people are turning to:
For Creative & "Human-Like" Writing: Anthropic's Claude
If you're finding GPT-5's writing to be a bit robotic, you HAVE to check out Claude. It's consistently praised for its more natural, nuanced, & less repetitive writing style. For creative tasks, brainstorming, or writing content where tone & personality are key, a lot of people are making Claude their go-to.
For Coding & Technical Tasks: Claude & DeepSeek
While GPT-5 has impressive coding benchmarks, some developers in the trenches find it slow & clunky. They're turning to models like Claude for its reliability on complex coding tasks. Another rising star is DeepSeek, which is often mentioned as a powerful (& free) alternative for coding. Some developers have switched to it completely.
For Research & Data Analysis: Google's Gemini
This is where Google's ecosystem really shines. Gemini is becoming a powerhouse for research because of its massive context window & its integration with Google Drive. The ability to point it at a bunch of PDFs or documents & have it synthesize the information is a game-changer for researchers, academics, & anyone who needs to wrangle large amounts of data.
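The Drive integration lives inside the Gemini app itself, but if you want that same "point it at my documents" workflow in code, Google exposes it through its API too. Below is a minimal sketch using the google-generativeai Python package; it assumes you have an API key configured, & the file name & model string are placeholders, so double-check the current model names before reusing this.

```python
# Minimal sketch: asking Gemini to synthesize a local PDF via the File API.
# Assumes the `google-generativeai` package is installed & GOOGLE_API_KEY is set.
# The file name & model name below are placeholders.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# Upload the document, then hand it to the model alongside the question.
report = genai.upload_file("quarterly_report.pdf")
model = genai.GenerativeModel("gemini-1.5-pro")

response = model.generate_content([
    report,
    "Summarize the key findings & flag anything that looks inconsistent.",
])
print(response.text)
```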
For Sourced & Up-to-Date Info: Perplexity
If you need an AI that acts more like a super-powered search engine, Perplexity is fantastic. It's designed to provide answers with citations & sources, which is critical when you need to verify information & avoid the "hallucinations" that can plague other models.
For a More Conversational Feel: Microsoft Copilot
Even though Copilot is powered by OpenAI's models, Microsoft has fine-tuned it differently. Some users, especially those who use the voice mode, find it more conversational & less robotic than ChatGPT. It's a subtle difference, but it makes a big impact on the user experience.

The Rise of the Custom AI Chatbot

Here's where things get REALLY interesting, especially for businesses. The frustrations with a one-size-fits-all model like GPT-5 highlight a growing need for specialization. Businesses are realizing that an off-the-shelf AI, no matter how powerful, doesn't know the specifics of their products, their customers, or their internal processes.
This is where platforms like Arsturn come into play. Instead of relying on a generic model that knows a little bit about everything, businesses need a solution that knows a LOT about their specific world. Arsturn helps businesses build no-code AI chatbots that are trained on their own data.
Imagine a customer service chatbot on your website. You don't want it giving generic answers based on the entire internet. You want it to answer questions based on your product manuals, your shipping policies, & your FAQs. That's what a custom AI does. It provides instant, accurate support 24/7 because it's an expert in your business, not just an expert in general knowledge. This kind of specialized tool is becoming essential for boosting conversions & providing the kind of personalized customer experiences that build loyalty.
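To be clear, with a platform like Arsturn you never touch code, but if you're curious what "grounded in your own data" actually means under the hood, here's a deliberately tiny sketch of the idea: pull the relevant snippets out of YOUR documents first, then have the model answer only from those. Everything here is illustrative; the openai package & model name are assumptions, & the naive keyword lookup is standing in for the real retrieval (embeddings & vector search) a production setup would use.

```python
# Conceptual sketch ONLY: what "answers grounded in your own docs" means
# under the hood. Platforms like Arsturn handle this without code; this is
# just to illustrate the idea. Assumes the `openai` package & an API key.
# The keyword match below stands in for real retrieval (embeddings/vector search).
from openai import OpenAI

client = OpenAI()

COMPANY_DOCS = {
    "shipping": "Orders ship within 2 business days. International shipping takes 7-10 days.",
    "returns": "Unused items can be returned within 30 days for a full refund.",
    "warranty": "All products include a 1-year limited warranty covering manufacturing defects.",
}

def retrieve(question: str) -> str:
    """Naive stand-in for retrieval: grab doc snippets that share words with the question."""
    words = set(question.lower().split())
    hits = [text for topic, text in COMPANY_DOCS.items()
            if topic in words or words & set(text.lower().split())]
    return "\n".join(hits) or "No matching policy found."

def answer(question: str) -> str:
    context = retrieve(question)
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "Answer ONLY from the provided company docs. "
                                          "If the docs don't cover it, say so.\n\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer("How long do I have to return something?"))
```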

So, What's the Takeaway?

Honestly, it's a pretty exciting time to be using AI. The initial disappointment with GPT-5 has actually sparked a really important conversation. It's pushing users to explore a wider range of tools & forcing a move away from AI monoculture.
You absolutely do not have to use GPT-5 if you don't like it. You can pay to access the legacy OpenAI models you know & love, or you can dive into the growing ecosystem of powerful alternatives. The future of AI isn't about finding the one "master" model. It's about building a personalized toolkit of specialized AIs that are perfect for what YOU need to do. Whether that's using Claude for writing, DeepSeek for coding, or building a custom chatbot with a platform like Arsturn for your business, the power is shifting back to the user.
Hope this was helpful! Let me know what you think & what your experience has been.
