GPT-5 Can't Make To-Do Lists in Cursor? Here's the Fix & Explanation
Zack Saadioui
8/11/2025
Can't Get GPT-5 in Cursor to Make a To-Do List? Here’s What’s Going On & How to Deal With It
Hey there! So, you’re trying out the shiny new GPT-5 in Cursor, you ask it to whip up a to-do list for your project, &… crickets. Or worse, it just writes out a list in the chat window like a basic chatbot, completely ignoring the cool, integrated to-do list feature. If you’re pulling your hair out thinking you’re doing something wrong, let me just say: it’s not you, it’s the tech.
Honestly, this is one of those super frustrating moments in the fast-moving world of AI tools. You see a demo, you get excited about a feature, & then it just doesn't work as advertised. Turns out, you're not alone in this. A bunch of developers have been flagging this exact issue.
Let's dive deep into what's happening, some workarounds you can use RIGHT NOW, & what your other options are if this is a deal-breaker for you.
First Things First: Yes, It’s a Known Bug
Let's get this out of the way. The problem with GPT-5 not being able to see or write to-do items in Cursor is a confirmed bug. Across the Cursor community forums, users on both macOS & Windows have reported that no matter how they prompt the agent, GPT-5 just doesn't seem to know the to-do list tool exists. It's like asking someone to hand you a wrench when they don't even know what a wrench is.
The Cursor team themselves have chimed in & said that this feature is "currently unavailable for this model" & that they're working on it. This is both good & bad news. It's good because you can stop trying to find a magical prompt that will suddenly make it work. It's bad because… well, it's a feature that doesn't work, & there's no official timeline for a fix.
Some users have even pointed out that the GPT-5 integration in general feels a bit clunky compared to other models like Claude Sonnet 4, which seems to handle features like to-do lists much more gracefully. This isn't uncommon when a new, powerful model gets integrated into an existing tool. There are often a few bumps in the road.
The Immediate Workaround: Go Old School with Markdown
So, what can you do in the meantime? One clever user on the forums suggested a pretty simple & effective workaround: just ask GPT-5 to create the to-do list in a markdown file.
Here’s how you can do it:
Create a new folder in your project, maybe call it /.todos or something similar.
When you have a task for GPT-5, prompt it like this: "Okay, I need you to implement a new user authentication flow. First, create a detailed to-do list of all the steps you'll take in a new markdown file named auth-flow-todo.md inside the /.todos folder."
This way, you still get the step-by-step plan from the AI, & you have a dedicated place to track the tasks. It's not as slick as the integrated feature, for sure, but it gets the job done & keeps your project organized. It's a practical solution until the official feature is patched.
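One nice side effect of keeping the lists in plain markdown is that they're easy to script around. Here's a minimal Python sketch, assuming the /.todos folder name suggested above & the standard "- [ ]" checkbox convention, that counts the tasks still open across those files:

```python
from pathlib import Path

def open_items(markdown: str) -> list[str]:
    """Return the text of every unchecked '- [ ] ' task in a markdown string."""
    return [
        line.strip()[6:]  # drop the "- [ ] " prefix
        for line in markdown.splitlines()
        if line.strip().startswith("- [ ]")
    ]

def report(todo_dir: str = ".todos") -> dict[str, list[str]]:
    """Map each to-do file in the folder to its remaining open tasks."""
    return {
        path.name: open_items(path.read_text())
        for path in Path(todo_dir).glob("*.md")
    }
```

Run report() from your project root & you get a quick per-file summary of what's left, which gets you surprisingly close to the integrated feature.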
Why Does This Kind of Bug Even Happen? A Peek Behind the Curtain
It might seem weird that a super-advanced model like GPT-5 can write complex code but can't handle a simple to-do list. Here’s the thing: it's not really about intelligence; it's about integration & tool use.
AI coding assistants like Cursor are more than just a chat window slapped onto a text editor. They are deeply woven into the fabric of the IDE. They have what's called "codebase awareness," meaning they can read your files, understand the project structure, & see the context of your code. To do this, they use a few tricks:
Vector Embeddings & Search: Your entire codebase is often indexed & converted into a format (vector embeddings) that the AI can quickly search. When you ask a question, the assistant first finds the most relevant chunks of code & "stuffs" them into the prompt it sends to the large language model (LLM). This is how it "knows" about your project.
Tool Calling: Modern LLMs can use "tools." These are essentially functions that the model can call to interact with the outside world. In Cursor's case, creating a to-do list item is a specific tool. The model needs to be "taught" or "fine-tuned" to recognize when to use this tool & what information to pass to it.
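To make that second point concrete, here's a rough sketch of how tool calling works on the IDE side. The schema follows the common OpenAI-style function-calling shape, but the tool name "create_todo_item" & its fields are hypothetical illustrations, not Cursor's actual internals:

```python
import json

# Hypothetical tool definition in the JSON-schema shape used by
# OpenAI-style function calling. Cursor's real internal tools differ.
TODO_TOOL = {
    "type": "function",
    "function": {
        "name": "create_todo_item",
        "description": "Add an item to the integrated to-do list.",
        "parameters": {
            "type": "object",
            "properties": {"text": {"type": "string"}},
            "required": ["text"],
        },
    },
}

TODOS: list[str] = []

def dispatch(tool_call: dict) -> str:
    """The IDE side: parse a tool call emitted by the model & run it."""
    if tool_call["name"] == "create_todo_item":
        args = json.loads(tool_call["arguments"])
        TODOS.append(args["text"])
        return f"added: {args['text']}"
    # If the model never emits the call in the first place -- which is
    # what seems to be happening with GPT-5 -- this code never runs.
    return "unknown tool"
```

The key takeaway: the model has to decide to emit that structured call on its own. If it just writes a bulleted list as plain chat text instead, the dispatcher never fires, & the integrated feature silently does nothing.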
The problem with GPT-5 seems to be a breakdown in this tool-calling process. The model, for whatever reason, isn't correctly identifying the user's intent to create a to-do list & therefore isn't calling the right internal function within Cursor. This could be due to a few things:
New Model, New Training: GPT-5 is a new frontier model. The way it was trained & the way it expects to be prompted for tool use might be different from previous models like GPT-4 or Claude. The Cursor team likely needs to do some specific fine-tuning to get it to play nicely with their custom tools.
"Request Happy" Behavior: Some users have noted that GPT-5 in Cursor is very "request happy," meaning it often stops to ask for validation before proceeding. This could be a symptom of the model being overly cautious or not fully understanding the capabilities of its environment, leading it to avoid using tools it's not 100% confident about.
Integration Complexity: Integrating a new, massive LLM is a BIG undertaking. There can be all sorts of unexpected issues with how the model's output is parsed & how it interacts with the IDE's existing features. It's rarely a simple plug-and-play operation.
This is a great example of where the future of AI assistants is heading. It’s not just about raw intelligence, but about creating a seamless bridge between the AI & the tools we use every day. As an emerging standard, the Model Context Protocol (MCP) aims to standardize how AI agents discover & communicate with tools, which should hopefully make these kinds of integration bugs less common in the future.
General Troubleshooting for Cursor: Just in Case
While the to-do list issue is a specific GPT-5 bug, it's always possible that other issues could be compounding your frustration. If Cursor is feeling generally sluggish or buggy, here are a few things you can try:
Check Your Network Connection: Cursor's AI features rely on a solid connection to their servers. Sometimes, corporate proxies or VPNs can interfere. Cursor even has a built-in diagnostics tool you can run from Cursor Settings > Network. One common culprit is the blocking of HTTP/2, which you can often work around by enabling an HTTP/1.1 fallback in the settings.
Restart or Re-login: The classic "turn it off & on again." A simple restart of the Cursor application or logging out & back in can sometimes clear up weird state issues.
Check for Extension Conflicts: Like VS Code, Cursor supports extensions, & these can sometimes cause performance problems. You can use the built-in Developer: Open Process Explorer to see if a particular extension is eating up resources.
The .cursorrules File: For more advanced control, you can add a .cursorrules file to your project's root directory. This file lets you give the AI specific instructions about your project's structure, coding style, & libraries to use. While it won't fix the to-do bug, it can help the AI provide more relevant suggestions in general.
Is It Time to Look at Alternatives?
If the to-do list feature is a critical part of your workflow & you can't wait for a fix, it might be worth exploring other options. The AI coding assistant market is booming, & there are some fantastic alternatives out there.
GitHub Copilot: This is probably the most well-known AI coding assistant. It's developed by GitHub & OpenAI, so it's incredibly well-integrated, especially if you're already in the GitHub ecosystem. While its primary strength is code completion, its chat feature is quite powerful & can help with generating plans & explanations. It doesn't have a dedicated "to-do" feature like Cursor's, but its overall stability is a big plus.
Sourcegraph Cody: Cody is another excellent choice, particularly for developers working on large, complex codebases. Its main advantage is its deep understanding of your entire repository, thanks to Sourcegraph's powerful code search engine. Like Cursor, it aims for full project context, making it great for refactoring & understanding legacy code.
Tabnine: Tabnine is a strong contender that places a heavy emphasis on privacy & security. It can be self-hosted, which is a huge deal for enterprises with strict data policies. It offers great code completion & can be customized to your team's coding style. It's a solid, reliable choice for teams who want AI assistance without sending their code to the cloud.
Open Source Options: For those who like to tinker & have full control, the open-source community has some gems:
Zed: A high-performance, multiplayer code editor designed for collaboration between humans & AI.
Aider: A terminal-based assistant that lets you code with AI in a chat-like interface, with direct integration into your Git workflow.
Void: An open-source fork of VS Code that aims to provide AI-powered features with a strong focus on privacy.
A Quick Word on Business & Customer Engagement
Thinking about these AI tools also gets me thinking about how businesses are using AI to interact with their customers. A bug in a developer tool is one thing, but a bug in your customer-facing chatbot can be a real problem.
This is where having a reliable & customizable platform is key. For example, a service like Arsturn helps businesses build no-code AI chatbots that are trained on their own data. This means the chatbot knows YOUR business, YOUR products, & YOUR documentation inside & out. It's not a generic model trying to figure things out on the fly. With Arsturn, you can create a custom AI chatbot that provides instant customer support, answers questions, & engages with website visitors 24/7. It’s a great way to boost conversions & provide personalized experiences without the headaches of a buggy integration. It's about having an AI that's a true expert on your business, which is what we all want from our tools, right?
Wrapping It Up
So, to sum it all up: that annoying bug with GPT-5 & to-do lists in Cursor is real, but it's not a showstopper. The markdown file workaround is your best friend for now.
It’s a good reminder that we're still in the early days of these powerful AI-native development environments. There will be bumps, bugs, & a bit of a learning curve. The key is to be flexible, find workarounds, & keep an eye on the alternatives. The pace of innovation is insane, so what’s a frustrating bug today might be a seamlessly integrated, mind-blowing feature tomorrow.
Hope this was helpful! Let me know if you've found any other workarounds or what your favorite AI coding assistant is. It's always good to share notes on this stuff.