8/12/2025

So, You're Having Trouble with GPT-5 Projects? You're Not Alone.

Alright, let's talk about it. The hype around GPT-5 has been HUGE. It was billed as the next giant leap, the model that would change everything. & for some things, it’s incredibly powerful. But if you’ve been trying to use the new "Projects" feature for any serious work, you might be pulling your hair out. You're getting weird errors, the AI is forgetting things you just told it, & sometimes, it feels like it's just... not trying very hard.
Honestly, it’s not just you. Turns out, a lot of the issues with the Projects feature are tied to some bigger, core problems with GPT-5 itself. It’s a classic case of the engine sputtering, not just a faulty transmission.
So, let's break down what's REALLY going on, why the Projects feature feels so flaky, & what you can actually do about it.

The Core Problem: GPT-5's "Thinking Economy" & Other Growing Pains

Before we even get to the Projects feature specifically, we have to understand the quirks of the underlying model. A lot of the frustration stems from a few key things happening behind the scenes.

1. The "Router" Is Trying to Be TOO Efficient

This is probably the biggest culprit. GPT-5 isn't just one giant model. It's a collection of specialized sub-models. When you send a prompt, a "router" decides which sub-model is best for the job. The goal is efficiency—if it's a simple question, it sends it to a faster, less "expensive" model. If it's a complex reasoning task, it's supposed to send it to the heavy-duty model.
The Problem: The router often gets it wrong. It prioritizes speed over quality, meaning your complex project queries might be getting handled by a "dumber" version of the AI. This leads to shallow, incomplete, or just plain wrong answers, even when you know a better response is in there somewhere. It feels like you've paid for a V8 engine but the car is deciding to use only two cylinders to save gas.
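If you're working through the API instead of the chat UI, you don't have to live with the router's choices. Here's a minimal sketch, using the official openai Python SDK, of naming a model yourself & asking for more deliberate reasoning up front. The exact model string & the reasoning_effort parameter are assumptions on my part; check what your account & SDK version actually expose.

```python
# Sketch: sidestepping the chat UI's router by naming a model directly via the API.
# Assumes the official openai Python SDK & an OPENAI_API_KEY in your environment.
# The model string & the reasoning_effort parameter are assumptions; verify against
# the models & options your account actually offers before relying on this.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-5",              # pick the model yourself instead of letting a router decide
    reasoning_effort="high",    # assumption: explicitly request the deeper "thinking" mode
    messages=[
        {"role": "system", "content": "Prioritize correctness & thoroughness over speed."},
        {"role": "user", "content": "Analyze thoroughly: why does this migration deadlock?"},
    ],
)

print(response.choices[0].message.content)
```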

2. Model Drift is REAL

OpenAI is constantly updating GPT-5. While this is good for long-term improvement, it means the model you used yesterday might not be the same one you're using today. This "model drift" can break workflows that used to work perfectly. A prompt that gave you a beautifully formatted JSON output last week might suddenly start giving you a bulleted list. For long-running projects, this is a nightmare.

3. It Has a Goldfish Memory (Sometimes)

Even with a massive context window (like 128K tokens), GPT-5 can struggle with recall in very long conversations. If you're deep into a coding project & you refer back to a function you defined 20 messages ago, there's a good chance it's forgotten the details. Users have reported the model stopping halfway through summarizing large PDFs or just losing track of critical context in complex tasks. This makes the "long-running session" promise of the Projects feature feel pretty hollow.
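If you want an early warning before the model starts dropping things, you can roughly track how full the context window is getting & restate your key facts before you hit the wall. Here's a small sketch using tiktoken as an approximate counter; as far as I know there's no published GPT-5 encoding, so cl100k_base is just a stand-in & the numbers are estimates, not exact figures.

```python
# Rough sketch: estimating how much of the context window a long conversation has used.
# cl100k_base is used purely as an approximation of GPT-5's tokenizer (assumption);
# treat the result as a sanity check, not an exact count.
import tiktoken

CONTEXT_WINDOW = 128_000  # commonly cited window size; check your tier's actual limit
enc = tiktoken.get_encoding("cl100k_base")

def used_tokens(messages: list[dict]) -> int:
    """Approximate token count for a list of {'role': ..., 'content': ...} messages."""
    return sum(len(enc.encode(m["content"])) for m in messages)

def should_restate_context(messages: list[dict], threshold: float = 0.7) -> bool:
    """Flag when it's time to restate the critical context (function signatures, decisions, etc.)."""
    return used_tokens(messages) > threshold * CONTEXT_WINDOW
```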

4. The Personality Went Poof

A lot of users have noticed that GPT-5 has a colder, more robotic tone. It often gives short, overly formal, & unhelpful replies. For businesses, this is a major step back. If you're trying to develop customer-facing communications or content, the last thing you want is an AI that sounds like a disinterested robot.
This is actually where specialized AI solutions can make a HUGE difference. For example, a business wouldn’t want its customer service to have this cold tone. That's why many are turning to platforms like Arsturn, which helps businesses create custom AI chatbots trained specifically on their own data. The result is an AI that not only knows your business inside & out but also communicates in your brand's voice, providing instant, personalized, & genuinely helpful support to website visitors 24/7. It's about creating a connection, not just spitting out an answer.

The Big, Scary "Projects" Feature Bug: "conversation_tree_corrupt"

Okay, now for the main event. The single most disruptive issue with the Projects feature is a terrifying error: "conversation_tree_corrupt".
This isn't just a small glitch. This error can make your entire project, or at least a significant conversation within it, completely inaccessible. Imagine spending hours debugging code with the AI, building up context, only to have the entire conversation disappear with an "Unable to load, retry?" message. All that work, gone in an instant.
Users have reported this happening multiple times, especially when using integrations with tools like IntelliJ. OpenAI support has acknowledged that the issue is tied to the massive demand from the GPT-5 launch, which is causing performance and availability problems. So while they are working on it, for now, it's a game of Russian Roulette with your work.

Other Major Headaches with GPT-5 Projects

Besides the conversation corruption bug, there are other significant problems users are running into:
  • Bugs in Complex Code Generation: GPT-5 is pretty good at writing small, self-contained scripts. But ask it to scaffold a larger project, & it often falls apart. It messes up variable scope, has trouble with imports, & generally can't manage the complexity of a multi-file codebase.
  • Factual Inaccuracy & Hallucinations: The model still just makes things up. It will confidently give you the wrong stock price, invent details about a story, or misremember facts from a document you uploaded. This makes it unreliable for any task that requires factual accuracy.
  • Reasoning on Demand Only: You can't assume GPT-5 will reason deeply about your problem. You have to explicitly tell it to. It often won't think step-by-step unless you explicitly ask for that kind of analysis in your prompt.
  • Performance & Capacity Crunches: Even if you avoid the big bugs, you've probably run into slow loading times, timeouts, or being denied access to the more advanced "thinking modes" during peak hours.

Your Survival Guide: Workarounds That Actually Work

So, is it hopeless? Not entirely. While we have to wait for OpenAI to fix the core infrastructure, there are things you can do RIGHT NOW to get better results & protect your work.

For Better Responses & Deeper Thinking:

  1. Be Explicitly Demanding: This is the most important takeaway. Don't assume anything. Start your prompts with phrases that force the router to use the more powerful models.
    • "Analyze thoroughly..."
    • "Think step-by-step..."
    • "Provide detailed reasoning..."
    • "Think hard about this..."
  2. Use Custom Instructions: Go into your ChatGPT settings & set up custom instructions that tell the AI how you want it to behave by default. For example: "Always default to deep, step-by-step analysis. Prioritize correctness & thoroughness over speed."
  3. Practice "U-Shaped Prompting": To combat context loss in long projects, periodically remind the AI of the most important information. Start your prompt with the key context, ask your question, & then reiterate the key context at the end. It feels redundant, but it works, & there's a quick sketch of the pattern right below this list.
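Here's what that U-shaped pattern can look like in practice. This is just one way to phrase it; the helper name & wording below are illustrative, not anything official.

```python
# Minimal sketch of "U-shaped" prompting: restate the critical context before AND after
# the actual question, so it sits at both ends of the prompt where recall tends to be best.
def u_shaped_prompt(key_context: str, question: str) -> str:
    return (
        f"Key context (do not forget):\n{key_context}\n\n"
        f"Task:\n{question}\n\n"
        f"Reminder of the key context above:\n{key_context}\n"
        "Answer with the key context explicitly in mind."
    )

# Example usage (the function name & return shape are hypothetical project details):
prompt = u_shaped_prompt(
    key_context="parse_invoice() returns a dict with keys 'total', 'currency', 'line_items'.",
    question="Refactor the reporting module to consume parse_invoice() output.",
)
```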

For Project Stability & Workflow Management:

  1. Version Your Prompts: If you have a workflow that depends on a specific output, save your prompts! Keep a library of what works. When a model update breaks your flow, you can go back to your versioned prompt & experiment with tweaks until you get it working again (there's a small sketch of this after the list).
  2. Consider the API: The chat interface is where the unpredictable routing happens. If you have the technical skills, using the API gives you direct access to specific models. This offers far more control & consistency for production workflows.
  3. Demand Proof: When you ask the AI to perform an action, like calling a tool or analyzing a file, ask it to show its work. "Show me the plan first, then execute it, & then show me the completed actions against that plan." This helps prevent it from "pretending" to do something it didn't.
  4. External Backups: This one is crucial. DO NOT trust the Projects feature as your single source of truth. Regularly copy & paste important code, analysis, & conversations into an external document (Notion, a text file, anything). If the "conversation_tree_corrupt" bug hits, you won't lose everything.
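As a concrete, purely illustrative example of points 1 & 4, here's a tiny sketch of keeping prompts as versioned files on disk & appending anything important to a local backup log. The file names & layout are arbitrary examples, not part of any OpenAI tooling; the point is simply that nothing critical lives only inside a Project.

```python
# Sketch of a lightweight safety net: versioned prompt templates kept in plain files,
# plus an append-only local log of anything you can't afford to lose to a corrupted project.
# Paths & file names here are arbitrary examples (assumptions), not any official layout.
import json
from datetime import datetime, timezone
from pathlib import Path

PROMPT_DIR = Path("prompts")          # e.g. prompts/scaffold_api_v3.txt, kept under git
BACKUP_FILE = Path("project_backup.jsonl")

def load_prompt(name: str, version: str) -> str:
    """Load a specific, known-good prompt version instead of retyping it from memory."""
    return (PROMPT_DIR / f"{name}_{version}.txt").read_text()

def backup(label: str, content: str) -> None:
    """Append important code or analysis to a local file so a UI bug can't take it with it."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "label": label,
        "content": content,
    }
    with BACKUP_FILE.open("a") as f:
        f.write(json.dumps(record) + "\n")
```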

When to Use a Different Tool for the Job

Here's the thing: GPT-5 is trying to be a jack-of-all-trades, but it's proving to be a master of none, especially for specialized business needs. The unreliability, robotic tone, & factual inaccuracies make it a risky choice for mission-critical tasks like customer interaction or lead generation.
This is where a purpose-built solution shines. If your goal is to engage customers on your website, you don't need a model that can write a sonnet about shoelaces. You need a model that can answer product questions accurately, capture leads effectively, & reflect your brand's personality flawlessly.
That's the entire philosophy behind Arsturn. It’s a conversational AI platform designed to help businesses build meaningful connections. Instead of relying on a generic, unpredictable model, Arsturn lets you build a no-code AI chatbot trained exclusively on your company's data—your website content, your help docs, your product catalogs. This ensures the chatbot provides personalized, accurate, & on-brand experiences every time, helping to boost conversions & build customer trust. It's about using AI as a precision tool, not a sledgehammer.

Final Thoughts

Look, GPT-5 is an incredible piece of technology, but the "Projects" feature & the model itself are clearly going through some serious growing pains. It feels like we've been given the keys to a prototype race car—it's unbelievably fast in a straight line, but the steering is loose, the brakes are unreliable, & sometimes the engine just cuts out.
By being smart with your prompts, protecting your work, & recognizing when a more specialized tool is needed, you can navigate these issues. Hopefully, OpenAI will shore up the infrastructure soon, but in the meantime, these workarounds should help you get by.
Hope this was helpful. Let me know what you think, & share any other tricks you've found that work!

Copyright © Arsturn 2025