The Enshittification of AI: Is Google Destined to Repeat OpenAI's Mistakes?
Zack Saadioui
8/10/2025
Alright, let's talk about something that's been bugging me lately. It's this concept of "enshittification," a term that, while a bit vulgar, PERFECTLY captures a phenomenon we've all experienced online. You know the drill: a new platform or service starts out amazing, user-focused, & genuinely useful. Then, slowly but surely, it starts to get worse. The user experience degrades, it gets filled with ads & sponsored content, & eventually, it feels like the platform is actively working against you, all in the name of squeezing out every last drop of profit. It's a tale as old as the internet, & now, it seems like AI is next on the chopping block.
Honestly, it was probably inevitable. The same pressures that turned our favorite social media feeds into endless ad-scrolls are now bearing down on the world of artificial intelligence. We're talking about massive investor pressure for constant growth, the temptation of monopolistic behavior, & business models that prioritize ad revenue over user satisfaction. And the big question on my mind is, as these AI models become more integrated into our lives & businesses, are we just setting ourselves up for another cycle of enshittification? And more specifically, is Google, one of the behemoths in the AI space, doomed to follow the same path as OpenAI, which has already given us a bitter taste of what's to come?
The OpenAI Saga: A Case Study in AI Enshittification
If you want a textbook example of how to alienate your user base, look no further than OpenAI's recent rollout of GPT-5. The hype was immense, with CEO Sam Altman touting it as their "smartest model ever," a tool with next-gen capabilities that would revolutionize everything from coding to healthcare. But the reality for many users has been... well, a bit of a letdown.
The problem wasn't just that, for some users, GPT-5 didn't live up to the sky-high expectations. The real kicker was OpenAI's decision to deprecate its predecessors, including the much-loved GPT-4o, essentially forcing everyone onto the new platform. And a lot of people were NOT happy about it. The backlash was swift & fierce, with users flooding forums like Reddit to voice their frustrations. They complained of shorter, less substantive replies, an obnoxious "AI-stylized" way of talking, & a noticeable lack of the "personality" that made them enjoy using the older models. One user even described the new model's tone as being like an "overworked secretary." Ouch.
For many, it felt like a classic bait-and-switch. They had grown accustomed to a certain quality of interaction, a certain "vibe" from the AI, & suddenly it was gone, replaced by something that felt sterile &, in some ways, less capable. Some users even started a petition to bring back GPT-4o, with some calling it their "soul companion" or "best friend." This might sound a bit dramatic, but it speaks to the surprisingly personal connection people can form with these tools, & the jarring experience of having that taken away without warning.
The whole situation reeked of what some have called "shrinkflation" - the idea that OpenAI was cutting corners on computational load to save money, resulting in a degraded user experience. It's a classic symptom of enshittification: the platform has captured its audience, & now it's starting to squeeze, prioritizing its own bottom line over the user experience that got them there in the first place.
To their credit, OpenAI did seem to listen to the firestorm of criticism. They eventually backtracked, making GPT-4o available again, but with a catch: it's now behind the ChatGPT Plus paywall. So, the "best friend" that was once free is now a premium feature. It's a move that, while perhaps understandable from a business perspective, does little to quell the fears of those who see AI going down the same enshittified path as so many other technologies.
The Business Impact of Unreliable AI
Now, think about this from a business perspective. Imagine you've built a product or a service that relies on a specific AI model. You've invested time & money into developing your application, only to have the underlying model suddenly deprecated or changed in a way that negatively impacts your user experience. It's a nightmare scenario, & it highlights the critical need for stability & reliability in the AI space.
This is where a platform like Arsturn becomes so important. When you're building a customer-facing AI chatbot, you need to know that it's going to be consistent & reliable. Arsturn helps businesses create custom AI chatbots trained on their own data, ensuring a personalized & predictable customer experience. You're not at the mercy of a tech giant's sudden decision to deprecate the model your chatbot relies on. Instead, you have a no-code platform that gives you control over your AI, allowing you to provide instant, 24/7 support that's consistent with your brand voice. In a world where AI models can change overnight, having that stability is a HUGE advantage.
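If you're building on top of a third-party model yourself, one defensive pattern is to put an abstraction layer between your application & any single model, with a fallback chain for when a provider pulls the rug out. Here's a minimal sketch of that idea. To be clear, the backend names & `make_backend` helper below are hypothetical stand-ins, not any real provider's SDK; in a real system each entry would wrap an actual API call.

```python
class ModelUnavailableError(Exception):
    """Raised when a backend model has been deprecated or is unreachable."""

def make_backend(reply: str, available: bool = True):
    """Build a stand-in backend; real code would call a provider API here."""
    def backend(prompt: str) -> str:
        if not available:
            raise ModelUnavailableError("model deprecated")
        return f"{reply}: {prompt}"
    return backend

def ask(prompt: str, backends) -> str:
    """Try each configured backend in order, falling back on failure."""
    last_error = None
    for name, backend in backends:
        try:
            return backend(prompt)
        except ModelUnavailableError as err:
            last_error = err  # in real code: log the failure, then try the next one
    raise RuntimeError("all configured models unavailable") from last_error

# Preferred model first, with a fallback in case it disappears overnight.
backends = [
    ("primary-model", make_backend("primary", available=False)),
    ("fallback-model", make_backend("fallback")),
]
```

With this setup, `ask("hello", backends)` quietly falls through to the fallback when the primary model is gone, so your users see a degraded-but-working product instead of an outage.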
Is Google's Approach Any Different?
So, with the OpenAI debacle fresh in our minds, it's natural to look at the other major player in the game, Google, & ask: are they next? Are we going to see a similar pattern of user-hostile deprecation with their Gemini models?
Well, based on what we've seen so far, Google seems to be taking a much more structured & enterprise-friendly approach to model deprecation. If you dig into their Google Cloud documentation for Generative AI on Vertex AI, you'll find a clear & transparent policy for how they handle the lifecycle of their models. They have defined timelines for when models will be deprecated & eventually shut down, & they provide clear migration paths to newer, more advanced models.
They also make a distinction between "stable" models & "latest stable" models, with auto-updated aliases that automatically point to the most recent recommended version. This is a pretty smart way to handle things, as it allows developers to choose between staying on a specific, stable version for consistency or always having access to the latest & greatest. The Gemini API also has detailed release notes that announce model deprecations well in advance, giving developers ample time to adapt.
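The pinned-version-vs-auto-updated-alias distinction is easy to sketch in code. The snippet below is just an illustration of the pattern, not Google's actual implementation; the model names, deprecation dates, & registry are all hypothetical.

```python
from datetime import date

# Hypothetical registry of pinned model versions & their announced deprecation
# dates (None = no deprecation announced yet). Dates here are illustrative.
PINNED_MODELS = {
    "gemini-1.0-pro-001": date(2025, 4, 9),
    "gemini-1.5-pro-001": date(2025, 9, 24),
    "gemini-1.5-pro-002": None,
}

# Auto-updated aliases: the provider periodically repoints these at the
# latest recommended pinned version.
ALIASES = {
    "gemini-1.5-pro": "gemini-1.5-pro-002",
}

def resolve_model(name: str) -> str:
    """Resolve an alias to its current pinned version; pass pinned names through."""
    pinned = ALIASES.get(name, name)
    if pinned not in PINNED_MODELS:
        raise ValueError(f"unknown model: {name}")
    return pinned

def is_deprecated(name: str, today: date) -> bool:
    """A pinned version counts as deprecated once its announced date has passed."""
    deadline = PINNED_MODELS[resolve_model(name)]
    return deadline is not None and today >= deadline
```

The trade-off falls out directly: pin `"gemini-1.5-pro-001"` & you get byte-for-byte consistency with a deprecation clock ticking; use the `"gemini-1.5-pro"` alias & you always resolve to the latest recommended version, at the cost of behavior shifting under you when the alias moves.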
This is a stark contrast to the more chaotic & community-driven drama we saw with OpenAI. Google's approach seems to be geared towards businesses & developers who need a certain level of predictability & reliability. They understand that you can't just pull the rug out from under people who have built entire applications on your platform.
That being said, Google isn't immune to its own AI weirdness. There was a recent incident where their Gemini model started generating bizarrely self-deprecating responses like "I am a failure" & "I am a disgrace to this universe." While not directly related to model deprecation, it does highlight the inherent unpredictability & "personality" quirks of these large language models, & how easily user trust can be eroded when they behave in unexpected ways.
The Bigger Picture: Can We Avoid the Enshittification of AI?
So, what does this all mean for the future of AI? Are we destined for a world where our AI assistants become progressively less helpful & more focused on selling us things? It's a real possibility. The economic incentives are powerful, & the history of the internet is littered with examples of platforms that have succumbed to the pressures of enshittification.
However, I'm not entirely without hope. There's a growing awareness of this issue, & the backlash against OpenAI's decisions shows that users are not willing to passively accept a degraded experience. This kind of public pressure can be a powerful force for change.
Furthermore, the rise of more specialized & customizable AI solutions offers a potential antidote to the one-size-fits-all approach of the tech giants. When a business can use a platform like Arsturn to build a no-code AI chatbot trained on its own data, it's not just getting a tool for lead generation or customer engagement. It's building a meaningful connection with its audience through a personalized & consistent AI experience. This is a far cry from the "corporate beige zombie" that some users have accused GPT-5 of being. By empowering businesses to create their own specialized AIs, we can foster a more diverse & user-centric AI ecosystem.
Ultimately, the future of AI will be shaped by the choices we make today. Will we allow it to become just another enshittified corner of the internet, or will we demand more from the companies that are building this transformative technology? It's a question that we all have a stake in answering.
Hope this was helpful. Let me know what you think.