Why GPT-5 in Cursor Fails: Debugging the 'Can't See Code' Issue
Zack Saadioui
8/11/2025
Debugging Your AI App: Why GPT-5 in Cursor Can't See or Write To-Dos
Hey everyone, hope you're doing well. I wanted to talk about something that's been a hot topic in the dev community lately: the new GPT-5 integration in Cursor. There's been a TON of buzz, with some people calling it a game-changer & others feeling pretty underwhelmed. If you've been trying to use it for your own projects, you might have run into a couple of frustrating roadblocks. Specifically, you might have noticed that GPT-5 seems to have a hard time "seeing" your entire codebase, & it's not exactly helping you create to-do lists.
Honestly, it can be a real head-scratcher. You've got this incredibly powerful AI model at your fingertips, but it feels like it's wearing a blindfold. So, what's really going on here? Let's break it down, because it turns out there are some pretty interesting reasons for these limitations.
The "Can't See" Problem: A Peek Behind the Curtain
First off, let's tackle the big one: why does it feel like GPT-5 in Cursor is flying blind? You're trying to debug a complex issue, & you need the AI to have a holistic understanding of your project, but it keeps giving you suggestions that don't make sense in the grand scheme of things. It's like asking a brilliant architect to design a new room without showing them the rest of the house.
The root of this problem lies in a concept called the "context window." Think of the context window as the AI's short-term memory. It's the amount of information the model can "see" at any given time. When you're interacting with an AI in an IDE like Cursor, the tool has to be very selective about what it shows the model. It can't just cram your entire codebase into the context window, for a few key reasons:
Cost: The bigger the context window, the more expensive it is to run the AI model. Sending thousands of lines of code back & forth for every single request would be incredibly costly, both for the developers of Cursor & for users who are using their own API keys.
Performance: Processing a massive amount of text takes time. If Cursor sent your entire project to GPT-5 every time you asked a question, you'd be waiting a LONG time for a response. The goal is to provide a seamless, real-time experience, & that means being smart about how much data is being processed.
Token Limits: AI models have a finite limit on the number of "tokens" (which are basically pieces of words) they can handle in a single prompt. While GPT-5 has a much larger context window than its predecessors, it's still not infinite. For a large project with hundreds of files, you'd quickly hit that limit.
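To get a feel for why cramming a whole project into the prompt is impractical, here's a rough back-of-envelope sketch in Python. The ~4-characters-per-token ratio is a common rule of thumb, & the window size here is a made-up illustrative number, not Cursor's or OpenAI's actual limit:

```python
import os

# Rough heuristic: ~4 characters per token for English text & code.
CHARS_PER_TOKEN = 4
# Hypothetical context window size, for illustration only.
CONTEXT_WINDOW_TOKENS = 200_000

def estimate_project_tokens(root: str) -> int:
    """Walk a project tree & estimate total tokens across source files."""
    total_chars = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith((".py", ".ts", ".js", ".go", ".rs")):
                path = os.path.join(dirpath, name)
                try:
                    with open(path, encoding="utf-8", errors="ignore") as f:
                        total_chars += len(f.read())
                except OSError:
                    pass  # skip unreadable files
    return total_chars // CHARS_PER_TOKEN

if __name__ == "__main__":
    tokens = estimate_project_tokens(".")
    print(f"~{tokens:,} estimated tokens; "
          f"fits in window: {tokens <= CONTEXT_WINDOW_TOKENS}")
```

Run it at the root of a mid-sized project & you'll see how quickly the numbers add up, which is exactly why the IDE has to be selective.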
So, how does Cursor decide what to show the AI? It uses a few clever tricks to try & provide the most relevant information without overwhelming the model. Typically, it will include:
The file you currently have open.
Any code you've specifically highlighted.
A high-level outline of your project, like a list of file & function names.
Recently accessed files.
This is a pretty smart approach, but it's not foolproof. If you're working on a bug that involves the interaction of multiple, less-frequently-used files, the AI might not have the full picture. It's like trying to solve a puzzle with half the pieces missing.
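The selection logic described above can be sketched as a simple greedy ranking under a token budget. To be clear, this is purely illustrative; Cursor's actual implementation is proprietary & certainly more sophisticated:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class FileInfo:
    """Minimal stand-in for what an IDE knows about each file."""
    path: str
    is_open: bool = False
    recently_accessed: bool = False
    highlighted_lines: list = field(default_factory=list)

def select_context(files: list, budget_tokens: int,
                   estimate: Callable[[str], int]) -> list:
    """Greedy sketch: include highest-priority files first,
    stopping when the token budget is exhausted."""
    def priority(f: FileInfo) -> int:
        score = 0
        if f.highlighted_lines:
            score += 4  # explicitly highlighted code wins
        if f.is_open:
            score += 2  # the file you're looking at
        if f.recently_accessed:
            score += 1  # recently touched files
        return score

    chosen, used = [], 0
    for f in sorted(files, key=priority, reverse=True):
        cost = estimate(f.path)
        if used + cost <= budget_tokens:
            chosen.append(f.path)
            used += cost
    return chosen
```

The failure mode described above falls straight out of this sketch: a file that isn't open, highlighted, or recent scores zero & gets dropped first, even if it's the one holding the bug.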
Now, here's the good news: there are a few workarounds you can try if you're running into this issue. One clever trick that's been floating around is the "two-step conversation." Here's how it works:
Start a new chat with a simple question, like "Can you see my current file?" This will get the conversation started & establish a baseline context.
In your next message, paste the entire contents of the relevant files & then ask your actual question. For some reason, this seems to bypass some of Cursor's context-saving optimizations & sends the full text to the AI.
It's a bit of a hack, but it can be surprisingly effective. You can also try using Cursor's "agent" mode, which is designed to have a more comprehensive view of your project. It's still not perfect, but it's a step in the right direction.
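If you'd rather script the "paste the entire contents" step than copy files by hand, a tiny helper like this does the job. The separator format & function name are arbitrary choices for illustration, not anything Cursor requires:

```python
from pathlib import Path

def build_full_context_prompt(question: str, file_paths: list) -> str:
    """Concatenate whole files into one prompt so the model sees them
    verbatim, instead of relying on the editor's context selection."""
    parts = []
    for path in file_paths:
        text = Path(path).read_text(encoding="utf-8")
        parts.append(f"--- {path} ---\n{text}")
    return "\n\n".join(parts) + f"\n\nQuestion: {question}"
```

Paste the resulting string as your second message in the two-step conversation, & the relevant code is guaranteed to be in front of the model.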
The "To-Do List" Conundrum: A Feature in the Making
Now, let's talk about the other big frustration: the inability to create & manage to-do lists. You're in the middle of a coding session, & you have a great idea for a new feature or a refactor that you want to tackle later. It would be amazing if you could just tell the AI, "Hey, add 'refactor the user authentication module' to my to-do list," right?
Unfortunately, this kind of task management functionality isn't really a standard feature in most AI code assistants... yet. While some tools, like CodeGPT, are starting to experiment with to-do list features, it's still pretty early days. The primary focus of these AI assistants has been on code generation, debugging, & code completion.
Here's why this is a bit more complicated than it sounds:
State Management: To manage a to-do list, the AI would need to maintain a persistent state across your entire coding session. This is a very different kind of problem than just responding to one-off prompts. It would need to be able to add, remove, & modify items on the list, & it would need to remember that list even after you've closed & reopened the IDE.
User Interface: How would this to-do list be displayed? Would it be a new panel in the IDE? A separate file? A pop-up window? Designing a user-friendly interface for this kind of feature is a challenge in itself.
Integration with Other Tools: Many developers already use dedicated project management tools like Jira, Trello, or Asana. A truly useful to-do list feature in an IDE would probably need to integrate with these tools, which adds another layer of complexity.
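To make the state-management point concrete, here's a minimal sketch of the kind of persistence such a feature would need, using a plain JSON file so the list survives closing & reopening the IDE. The class name & file name are hypothetical, not part of any real assistant's API:

```python
import json
from pathlib import Path

class TodoList:
    """Minimal persistent to-do list: state is written to disk on every
    change, so it survives IDE restarts."""

    def __init__(self, path: str = ".todos.json"):
        self.path = Path(path)
        # Load existing items if the file exists, else start empty.
        self.items = (json.loads(self.path.read_text())
                      if self.path.exists() else [])

    def add(self, task: str) -> None:
        self.items.append({"task": task, "done": False})
        self._save()

    def complete(self, task: str) -> None:
        for item in self.items:
            if item["task"] == task:
                item["done"] = True
        self._save()

    def _save(self) -> None:
        self.path.write_text(json.dumps(self.items, indent=2))
```

Even this toy version shows the gap: the assistant would need hooks for adding, completing, & displaying items, plus a story for syncing with Jira or Trello, which is a very different shape of problem than answering one-off prompts.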
So, while it's a fantastic idea, it's not something that's been fully baked into most AI coding assistants just yet. It's definitely on the horizon, but for now, you'll probably have to stick with your existing task management workflow.
The GPT-5 Performance Paradox: Is It Really Better?
This brings us to a bigger, more philosophical question: is GPT-5 in Cursor actually living up to the hype? Community discussions & early reviews paint a pretty mixed picture. Some developers are blown away by its ability to generate complex code & understand high-level instructions. Others are finding it to be slow, unreliable, & prone to making strange mistakes.
There are a few reasons for this discrepancy:
It's Still New: The integration of GPT-5 into Cursor is still in its early stages. There are bound to be some bugs & performance issues that need to be ironed out. The Cursor team is likely working hard to optimize the experience, but it's a work in progress.
The "Slow" Problem: Many users have reported that GPT-5 in Cursor is noticeably slower than other models. This could be due to a number of factors, including the sheer size of the model, the way it's being accessed through the API, & the amount of data being sent back & forth.
Model-Specific Quirks: Every AI model has its own unique personality & set of quirks. GPT-5 might be better at certain tasks than others. For example, it might be a master at generating boilerplate code but struggle with more nuanced debugging tasks. Some users have found that other models, like Claude 3.5 Sonnet, are actually more reliable for their specific workflows.
It's also worth remembering that the "best" AI model is often a matter of personal preference. What works well for one developer might not work as well for another. The key is to experiment with different models & find the one that best suits your needs.
A Glimpse into the Future: Where Do We Go From Here?
So, what does all of this mean for the future of AI in software development? Are we doomed to a future of frustrating, half-blind AI assistants?
Honestly, I don't think so. What we're seeing right now is just the beginning of a new era in software development. The tools are still evolving, & there are definitely some growing pains. But the potential is HUGE.
Imagine a future where your AI assistant has a perfect, real-time understanding of your entire codebase. It can not only help you write & debug code, but it can also anticipate your needs, manage your tasks, & even collaborate with you on high-level architectural decisions. That's the future we're heading towards, & it's pretty darn exciting.
And it's not just about coding. The same AI technologies that are transforming software development are also revolutionizing other areas of business. Take customer service, for example. In the past, if a customer had a question, they had to either call a support line & wait on hold or send an email & hope for a timely response.
Now, companies can use AI-powered chatbots to provide instant, 24/7 support. And I'm not talking about those clunky, frustrating chatbots of the past. Modern AI chatbots, like the ones you can build with Arsturn, are incredibly sophisticated. You can train them on your own data, so they can provide accurate, context-aware answers to customer questions. They can even be integrated directly into your website or app, providing a seamless customer experience.
For businesses, this is a game-changer. It means you can provide better, faster customer support at a fraction of the cost. And for customers, it means no more waiting on hold or wading through endless FAQ pages. It's a win-win.
And that's just one example. The same AI technology can be used to automate all sorts of business processes, from lead generation to marketing to sales. The possibilities are pretty much endless.
So, while it might be frustrating that GPT-5 in Cursor can't see your whole codebase or write your to-do lists just yet, it's important to remember that we're still in the early days of this AI revolution. The tools are only going to get better, & the possibilities are only going to expand.
In the meantime, my advice is to keep experimenting, keep learning, & keep an open mind. And if you're looking for a way to leverage the power of AI in your own business, you should definitely check out a platform like Arsturn. It's a great way to get started with AI automation & see for yourself what a difference it can make.
Hope this was helpful! Let me know what you think. Have you been using GPT-5 in Cursor? What has your experience been like? I'd love to hear your thoughts.