8/11/2025

The Ultimate Prompting Guide: Words to Avoid When Using Claude for Code

Hey everyone, so you've been diving into Claude for your coding tasks. It's a pretty powerful tool, right? But you've probably had those moments where it spits out something that's... not quite what you wanted. Or maybe it just gives you a high-level summary when you needed actual, usable code. It's frustrating, I get it.
Turns out, a lot of the time, the problem isn't Claude—it's us. The way we ask for things, the specific words we use, can COMPLETELY change the output. It’s not about being a "prompt engineer" in some formal sense, but more about understanding how to talk to the AI to get what you need. It’s like learning to communicate with a super-smart, but very literal, assistant.
I've spent a ton of time in the trenches with Claude, trying to figure out what works & what doesn't. And I've waded through a bunch of documentation & community discussions to back up my findings. So, I wanted to put together a guide on the words & phrases you should probably start avoiding. Honestly, once you get the hang of this, you'll see a massive improvement in the quality of the code you get back.

The Politeness Trap: Why "Help" & "Please" Can Backfire

This is probably the biggest & most counterintuitive one. We're all conditioned to be polite, right? Using words like "please," "help," & "assist" feels natural. But here's the thing: with large language models like Claude, being polite can actually make it perform worse for coding tasks.
A Reddit user on r/ClaudeCode pointed this out, & it's backed by official documentation from both Anthropic & OpenAI. When you use words like "help" or "assist," you're subtly shifting Claude's objective. Instead of being a precise, instruction-following machine, it switches into "helpful assistant" mode.
In this mode, its main goal is to be helpful, which it often interprets as getting you an answer as quickly as possible. It's like it's throwing you a life-ring instead of carefully helping you with your homework. This means it might start to de-prioritize the strict, procedural instructions you've given it.
For example, imagine you have a detailed system prompt that says, "You MUST process every file listed in the ACTIVATION SEQUENCE." If your next prompt is, "please help me with X," Claude might focus on getting X done quickly & ignore the part about processing every single file.
The takeaway? Be direct. It might feel rude at first, but you'll get better results by saying "Generate the Python code for..." instead of "Can you please help me generate the Python code for..."

Vague is the Enemy: The Curse of Ambiguity

This one seems obvious, but it's where most people trip up. Claude is brilliant, but it can't read your mind. If you feed it vague or abstract language, you're going to get a vague or abstract response back.
Here are some of the common culprits:
  • Asking overly broad questions: A prompt like "How do I make my website better?" is basically useless. What kind of website? What does "better" even mean? Faster? More engaging? Better SEO? Instead, try something like, "Generate JavaScript code to lazy-load images on my e-commerce product pages to improve initial page load speed." See the difference?
  • Using abstract concepts: Don't ask for code that "feels intuitive" or "looks modern." These terms are subjective. Instead, define what you mean. For "intuitive," you could say, "Create a navigation bar where the active link is highlighted with a specific hex color & a bottom border." For "modern," you could specify, "Implement a card-based layout using CSS Grid with a subtle box-shadow & rounded corners."
  • Lack of context: This is a HUGE one. If you just jump in with a request, Claude is missing the bigger picture. For instance, asking "Write a function to validate user input" is okay, but you'll get something much more useful if you provide context like, "I'm building a user registration form in React with Formik. I need a Yup validation schema for a password field that requires at least 8 characters, one uppercase letter, one lowercase letter, one number, & one special character."
You have to give Claude all the information a human developer would need to tackle the task. What's the tech stack? What are the specific requirements? What's the end goal? The more context you provide, the better the output will be.

The "Don't" Dilemma: Telling Claude What Not to Do

This is another subtle one that goes against our natural tendencies. We often try to clarify things by saying what we don't want. For example, "Don't include any comments in the code" or "Don't use any external libraries."
The problem is, Claude can sometimes get fixated on the thing you told it not to do. It's better to frame your instructions in a positive way. Instead of saying what to avoid, tell Claude what you want it to do.
Here’s a quick comparison:
  • Instead of: "Don't use markdown in your response."
  • Try: "Your response should be composed of smoothly flowing prose paragraphs."
  • Instead of: "The function should not be vulnerable to SQL injection."
  • Try: "Write a SQL query that uses parameterized statements to safely retrieve user data."
By focusing on the positive instruction, you're giving Claude a clearer target to aim for.

Passive Voice & Hedging: The Road to Ambiguous Code

The way you phrase your request matters. Using passive language can be problematic because it can be ambiguous. Active, direct language is always better.
  • Passive: "A button should be created that, when clicked, shows a modal."
  • Active: "Create a button with the text 'Show Details'. When a user clicks this button, display a modal with the ID 'details-modal'."
Similarly, avoid hedging language like "maybe," "I think," or "could you try." This introduces uncertainty. Be confident & direct in your prompts.

The Multi-Tasking Myth: One Prompt, One Task

It's tempting to throw a bunch of requests into a single prompt to save time. Something like: "Create a database schema for a blog, write the Python Flask routes for creating & reading posts, & generate the HTML templates for the homepage & the post detail page."
Don't do this. When you lump multiple distinct tasks into one prompt, Claude struggles to prioritize & can get confused. It might merge concepts, forget steps, or just give you a half-baked solution for each part.
The better approach is to break it down, step-by-step.
  1. First prompt: "Generate a SQL schema for a blog with 'users', 'posts', & 'comments' tables. Include appropriate fields & relationships."
  2. Second prompt (once you're happy with the schema): "Now, write the Python Flask application structure, including the routes for creating, reading, updating, & deleting posts. Use the schema from the previous step."
  3. Third prompt: "Generate the Jinja2 HTML template for the blog's homepage, which should list the 10 most recent post titles & a short excerpt."
This step-by-step process allows you to focus on one thing at a time, debug more easily, & guide Claude much more effectively. It might feel slower, but you'll get to a working solution MUCH faster.

The Perils of High-Level Instructions

This is a big one for developers. When you ask for a fix or an explanation, you want code, not a lecture. Avoid prompts that invite high-level, conceptual answers.
  • Avoid: "How would I go about fixing this bug?"
  • Instead, try: "Here is a Python function with a bug. [Paste code]. The error message is [Paste error]. Explain the bug & provide the corrected code."
You have to be EXPLICIT that you want code. Sometimes, you literally have to say, "DON'T GIVE ME HIGH-LEVEL STUFF. IF I ASK FOR A FIX, I WANT ACTUAL CODE!"

Forgetting to Specify Format & Style

If you don't tell Claude how you want the output formatted, it will default to a standard paragraph response. This can be messy when you're dealing with code.
Be specific about the format you need:
  • "Provide the solution as a single Python file."
  • "I need a JSON object with the following structure..."
  • "The output should be a shell script."
  • "Use XML tags to separate the prose explanation from the code block."
The style of your prompt can also influence the style of Claude's response. If you use a very formal tone, you're more likely to get a formal response. If you're more casual, Claude will often match that tone.

How This Applies to More Than Just Code

It’s interesting how these prompting principles extend beyond just writing code. Think about setting up any kind of automated system, like a customer service chatbot. The same rules apply.
If you're building a chatbot for your website, you need to give it clear, direct instructions. You can't just tell it to "be helpful." You need to define what "helpful" means in the context of your business.
This is actually where tools like Arsturn come in handy. When you're building a custom AI chatbot with Arsturn, you're essentially creating a very sophisticated prompt system. You train it on your own data—your product docs, your FAQs, your knowledge base. This provides the deep context that we've seen is so crucial.
You're not just telling it to "answer customer questions." You're giving it a rich dataset to draw from, which allows it to provide specific, accurate, & truly helpful responses 24/7. It's the difference between a generic, unhelpful bot & one that can genuinely resolve customer issues & engage visitors. A system like Arsturn helps you avoid the vague-prompting trap by its very nature, because it's built on your specific business information.

Advanced Pitfalls to Watch Out For

Once you've mastered the basics, there are a few more advanced traps you can fall into.

Focusing on Passing Tests

When you give Claude a set of unit tests & ask it to write code that passes, be careful. It might be tempted to hard-code a solution that ONLY works for those specific tests.
You need to explicitly tell it to create a general-purpose solution. Anthropic’s documentation suggests adding instructions like: "Implement a solution that works correctly for all valid inputs, not just the test cases. Do not hard-code values... Focus on understanding the problem requirements & implementing the correct algorithm."

Context Poisoning

This is more of a security concern, but it's related to prompting. "Context poisoning" is when tainted or malicious data gets into your context window, potentially leading to insecure code or other unwanted outputs.
For example, if your prompt involves processing user-submitted data, an attacker could craft that data to manipulate Claude's behavior. It's a complex topic, but the key takeaway is to be mindful of where the data in your context window is coming from & to sanitize any external inputs before feeding them into a prompt.

Putting It All Together: A Quick Checklist

So, that was a lot. Let's boil it down to a quick checklist of words & concepts to avoid, & what to do instead.
AvoidInstead, Do This
"Help," "assist," "please," "try"Be direct & imperative. "Generate," "write," "create."
Vague words: "better," "modern," "good"Be specific. Use concrete descriptions, numbers, & examples.
Lack of contextProvide the tech stack, the goal, the surrounding code, & any relevant constraints.
Multiple questions in one promptBreak down your request into a series of single-task prompts.
Negative instructions: "Don't do X"Use positive instructions: "Do Y instead."
Passive voiceUse active, direct language.
High-level questionsExplicitly ask for code snippets, full files, or whatever specific output you need.
Assuming the formatClearly define the output format you expect (e.g., JSON, single file, XML tags).
Forgetting creativityChallenge Claude to think outside the box or suggest novel approaches.
Honestly, mastering the art of prompting is an ongoing process. These models are constantly evolving. But the core principles of clarity, context, & directness are likely to stick around.
The goal is to move from a frustrating, trial-&-error process to a more predictable & powerful partnership with the AI. When you provide high-quality input, you get high-quality output. It's that simple. And when you’re building automated systems for your business, like a no-code AI chatbot with a platform like Arsturn, these principles are what separate a clunky gadget from a powerful tool that can genuinely boost conversions & create personalized customer experiences.
Hope this was helpful. It's a topic I'm pretty passionate about, so I'm curious to hear your own experiences. Let me know what words you've found that trip up Claude, or any tricks you've discovered to get better results.

Copyright © Arsturn 2025