8/12/2025

Let's be honest. Your MCP server logs are probably useless. Not because logging is a bad practice, but because most of what we're told to log from our MCP servers is just noise. It’s a firehose of data that tells you what happened, but rarely why it mattered, or what you can do about it. It’s a classic case of having tons of data but zero actual information.
Honestly, I’ve seen it time & time again. People get excited about the idea of the Model Context Protocol (MCP). They hear about a future where AI agents can seamlessly interact with tools & services, so they rush to set up a dozen MCP servers. They get their GitHub MCP, their Docker MCP, their weather MCP (because, why not?), & a whole bunch of others. They feel like they're building a sophisticated AI ecosystem.
But then reality hits. After a few weeks of tinkering, they find themselves using maybe... three or four of them. The rest are just sitting there, burning tokens & adding complexity for no real benefit. It turns out that a lot of these MCP servers are just slower, clunkier versions of tools we already have. They're glorified API wrappers that add latency without adding much value.
So, what's the deal? Why are so many MCP servers falling flat, & how do we turn them from useless novelties into genuinely powerful tools for our AI agents? Let's dive in.

The Big Problem: Why Most MCP Servers Are Just Plain Bad

Before we can fix the problem, we need to understand what makes so many MCP servers so ineffective. It's not just about bad code; it's about a fundamental misunderstanding of what they're supposed to do.

They're Just Slow Wrappers Around Existing APIs

One of the biggest complaints you'll see from developers who have actually tried to build a comprehensive MCP setup is that many servers are nothing more than thin wrappers for existing APIs. Someone takes the Slack API, puts an MCP server in front of it, & suddenly you have a "Slack MCP." The problem? Using the API directly in a script is often faster & more efficient. The MCP server just adds a layer of abstraction that introduces latency—sometimes 2-3 seconds per operation—without providing any new capabilities.
The terminal, in many ways, is the ultimate MCP. It's a direct, powerful interface for interacting with systems. If your MCP server is just mirroring command-line interface (CLI) commands, it’s probably not helping. It's just a less efficient way to do something an agent could already do. The key is to build MCPs that do things agents can't easily do on their own, or provide access to services they otherwise can't reach.
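To make the anti-pattern concrete, here's a rough sketch of what a lot of these wrappers boil down to, written with the official Python MCP SDK's FastMCP helper. The tool & the Slack call are purely illustrative; this isn't any particular server's code:

```python
# A hypothetical "Slack MCP" tool that is nothing but a pass-through.
# The agent could hit this API (or a CLI) directly; the MCP layer here
# adds a hop & latency without adding any capability.
import os

import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("slack-wrapper")

@mcp.tool()
async def post_message(channel: str, text: str) -> str:
    """Post a message to a Slack channel (thin wrapper, no added value)."""
    async with httpx.AsyncClient() as client:
        resp = await client.post(
            "https://slack.com/api/chat.postMessage",
            headers={"Authorization": f"Bearer {os.environ['SLACK_TOKEN']}"},
            json={"channel": channel, "text": text},
        )
    return resp.text  # just forwards the raw API response
```

Nothing in that tool does anything an agent with a terminal & a token couldn't already do in one curl command.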

The "All-or-Nothing" Access Model is a Security Nightmare

Another HUGE issue is permissions. When you give an AI agent, like Claude, access to an MCP server from a major provider, you're often giving it the keys to the entire kingdom. It's an all-or-nothing proposition. The agent gets access to every single tool available on that server, both the non-destructive "read" operations & the destructive "write" or "delete" operations.
Let's be real, we don't trust LLMs enough yet to just let them run wild with full destructive permissions. One Reddit user learned this the hard way when their AI "helpfully" updated a production config file. Yikes. This is a massive risk, & it forces developers to be overly cautious, which in turn limits the utility of the MCPs they build. You can't unlock the full potential of an AI agent if you're terrified it's going to accidentally burn down your production environment.

They Don't Focus on User Intent

This is probably the most critical point. Most MCP servers are built around tools, not intent. They expose a set of functions—like create_file, read_repo, or post_message—but they don't understand the user's ultimate goal. A user doesn't think, "I need to execute the create_app command." They think, "I want to build an app that looks like this."
A truly useful MCP server is one that helps an agent bridge that gap. The agent should know that a user's intent is to "create an app" & that there's a specific MCP for that purpose. This is a scenario where the task is too complex or nuanced to be handled by existing CLIs or the model's general knowledge. The MCP provides a specialized capability that's aligned with a high-level user goal. When we focus on mirroring CLIs, we're thinking like machines. When we focus on user intent, we're building tools that actually help humans (and their AI assistants) get things done.
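Here's a hedged sketch of what that looks like in practice: one intent-level tool instead of a pile of primitives the agent has to orchestrate itself. The create_app tool & its behavior are hypothetical:

```python
# Hypothetical contrast: tool granularity shaped by user intent.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("app-builder")

# Tool-centric design would expose create_file, read_repo, post_message, ...
# & hope the agent reconstructs the user's goal from those pieces.

# Intent-centric design exposes one tool that maps directly onto
# "I want to build an app that looks like this."
@mcp.tool()
def create_app(description: str, reference_url: str = "") -> str:
    """Scaffold a new app from a plain-language description
    (optionally imitating a reference design)."""
    # ... planning, scaffolding & wiring all happen behind one intent ...
    return f"Scaffolded app from description: {description!r}"
```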
This very philosophy of focusing on user intent over raw functionality is what makes modern AI tools so powerful. For instance, think about the customer service space. A poorly designed chatbot is like a bad MCP server—it just gives you a list of commands & gets stuck if you deviate. But a well-designed system, like the custom AI chatbots you can create with Arsturn, is built around understanding customer intent. Arsturn helps businesses build no-code AI chatbots trained on their own data, which allows the bot to understand what a customer is really asking for. It's not just about keyword matching; it's about providing a personalized, helpful experience that solves the customer's problem. This approach boosts conversions & builds meaningful connections because it's designed around the user's needs, not the system's limitations.

Making MCPs That Actually Work: A Practical Guide

Okay, so we've ragged on bad MCPs enough. How do we build good ones? It's not about having more servers; it's about having the right servers, built in the right way. Turns out, the developers in the trenches have figured out a few things.

1. Start with "Read-Only First"

This is rule number one, and it's non-negotiable. NEVER give an MCP server write access until you've used it in read-only mode for at least a month. This simple rule forces you to think about what's genuinely useful without introducing unnecessary risk.
A read-only approach is perfect for tasks like:
  • Code Reviews: "Review this pull request for obvious issues."
  • Issue Management: "What PRs need my review?" or "Which open issues mention this bug?"
  • Documentation Lookups: This is a game-changer. One of the most praised MCPs is one that provides up-to-date documentation for libraries directly within the AI's context. A developer can ask, "How do I handle file uploads in Next.js 14?" and the agent can pull the latest docs through the MCP, saving the developer from constantly switching between tabs. This alone can save 30 minutes a day.
Once you've proven the value of the MCP in read-only mode, you can then selectively & carefully consider adding write capabilities. But start safe.
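If you want a starting point, here's roughly what a minimal read-only documentation server looks like with the Python MCP SDK's FastMCP helper. The docs "index" is a placeholder you'd swap for a real one:

```python
# Minimal read-only MCP server: documentation lookup only, no write tools.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("docs-lookup")

# Placeholder corpus; in practice you'd index your real, up-to-date docs.
DOCS = {
    ("nextjs", "file uploads"): "Use a route handler & request.formData() ...",
}

@mcp.tool()
def search_docs(library: str, topic: str) -> str:
    """Return the latest documentation snippet for a library & topic.
    Read-only by construction: there is nothing destructive to call."""
    return DOCS.get((library.lower(), topic.lower()), "No docs found.")

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport
```

Because the server only registers read tools, there's nothing destructive for the agent to misuse, no matter how creative it gets.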

2. Embrace Spec-Driven Development

A lot of people, when they need to build a new service, just dive straight into the code. This is often a mistake, especially with something as complex as an MCP server. The vast majority of a software's life is spent in maintenance, not the initial build. If you just use a codegen tool to spin up your MCP server, guess what? You now own that code, forever.
A much better approach is what's called "spec-driven development." This means you write your API specification first, before you write a single line of implementation code. Your API becomes the product.
Here’s why this is so powerful for MCP development:
  • Clarity & Design: It forces you to think deeply about the design of your server before you get bogged down in the details. What tools should it have? What data will it need? How will it respond to requests?
  • Unblock Your Team: Once you have a spec, you can use tools to create mock servers. A mock server gives you working endpoints that return example data without having a real backend. This is HUGE. It means your front-end team can start building against the MCP, & your QA team can start writing tests, all before the backend is even finished. It decouples your development process & speeds everything up.
  • Selective Access: This approach also solves the "all-or-nothing" permission problem. Instead of giving an agent access to a whole server, you can be much more granular. You can pick & choose specific endpoints from your spec & say, "Hey LLM, use these five tools from these three different specs to accomplish your task." This gives you fine-grained control & dramatically reduces the security risk.
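One practical way to get that granularity is a plain allowlist: your specs may define dozens of operations, but you only register the ones this agent should see. A sketch, with made-up operation names:

```python
# Hypothetical selective exposure: the spec defines many operations,
# but only the allowlisted ones ever become tools the agent can call.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("selective-github")

def list_open_prs(repo: str) -> str:
    """Read-only: list open PRs for a repo."""
    return f"(stub) open PRs for {repo}"

def get_issue(repo: str, number: int) -> str:
    """Read-only: fetch a single issue."""
    return f"(stub) issue #{number} in {repo}"

def delete_branch(repo: str, branch: str) -> str:
    """Destructive -- deliberately never registered below."""
    return f"(stub) deleted {branch} in {repo}"

# Register only the allowlisted operations; delete_branch stays invisible.
for fn in (list_open_prs, get_issue):
    mcp.tool()(fn)
```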

3. Focus on Data Flow & Combining Strengths

The most revolutionary MCPs aren't just tools; they are critical components in a larger data flow. It’s not about the individual tools themselves, but about how they work together. A truly effective setup combines the best of both worlds: deterministic code for reliable operations & intelligent agents for decision-making.
Think about what agents are good at: understanding language, making decisions, handling ambiguity. Think about what code is good at: performing precise, deterministic tasks. A great MCP leverages both.
For example, an agent might interpret a user's vague request like, "Consolidate the latest sales reports & give me a summary." The agent can then use an MCP to perform the deterministic steps:
  1. Access the sales database (a service the agent can't reach directly).
  2. Pull the relevant reports based on specific criteria.
  3. Run a script to format & aggregate the data.
The MCP handles the "code" part, & the agent handles the "intelligence" part. This is a powerful combination that goes way beyond just wrapping a single API endpoint.
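Here's a hedged sketch of what that deterministic half can look like. The database, schema & criteria are all hypothetical; the point is that the precise steps live in code, not in the model's judgment:

```python
# Hypothetical deterministic tool: the agent decides *what* to summarize,
# this code does the precise pulling & aggregating.
import sqlite3

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("sales-reports")

@mcp.tool()
def consolidate_sales(start_date: str, end_date: str) -> str:
    """Pull sales rows in [start_date, end_date] & return totals by region."""
    conn = sqlite3.connect("sales.db")  # a store the agent can't reach directly
    try:
        rows = conn.execute(
            "SELECT region, SUM(amount) FROM sales "
            "WHERE sale_date BETWEEN ? AND ? GROUP BY region",
            (start_date, end_date),
        ).fetchall()
    finally:
        conn.close()
    return "\n".join(f"{region}: {total:.2f}" for region, total in rows)
```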

4. Choose the Right Agent Architecture

When you're building your MCP, you also have a choice in how the AI component is integrated. There are two main models: sampling & embedded agents.
  • Sampling: The MCP server delegates its LLM calls back to the client, so its decision-making process is reviewable by the client (or a human user). The MCP might propose an action, but it needs to be approved before it's executed. This is the best choice when you need user oversight, when the context is complex, or when the decisions have high stakes. It gives you flexibility & user control.
  • Embedded Agents: This is when the MCP has its own specialized AI model embedded within it. This allows the MCP to make decisions autonomously without constant back-and-forth with the client. This is great for performance (it eliminates network roundtrips) & for using highly specialized, domain-specific models that are optimized for a particular task.
The choice depends entirely on your needs. Do you need a human in the loop? Go with sampling. Do you need a high-performance, autonomous tool? An embedded agent might be better.
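For the sampling model specifically, the MCP spec lets a server ask the client's model for a completion instead of shipping its own. In the Python SDK this goes through the tool's request context; the sketch below assumes the session.create_message API & a hypothetical release-notes tool:

```python
# Rough sketch of server-initiated sampling: the server proposes the
# LLM call, but the client (& its human) stay in the approval loop.
from mcp.server.fastmcp import Context, FastMCP
from mcp.types import SamplingMessage, TextContent

mcp = FastMCP("release-notes")

@mcp.tool()
async def draft_release_notes(diff: str, ctx: Context) -> str:
    """Ask the *client's* model to draft notes from a diff; the client
    can review or veto the request before any text is generated."""
    result = await ctx.session.create_message(
        messages=[
            SamplingMessage(
                role="user",
                content=TextContent(type="text", text=f"Summarize this diff:\n{diff}"),
            )
        ],
        max_tokens=400,
    )
    return result.content.text if result.content.type == "text" else ""
```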

So, What's the Takeaway?

The hype around MCP is real, but it’s also led to a lot of wasted effort. The reality is that MCP isn't some magic wand. It's a protocol that gives AI agents a standardized way to access tools & data. That's it. It's not revolutionary in itself, but it is genuinely helpful when done right.
My advice? Stop thinking about building a massive, all-encompassing MCP ecosystem. Instead, start small. Find one real, nagging problem in your workflow that an AI could help with if it just had the right tool.
  • Is it constantly looking up documentation? Build a read-only doc-retrieval MCP.
  • Is it manually summarizing reports? Build an MCP that can access & process that data.
  • Is it managing a complex deployment process? Build an MCP that orchestrates those steps.
Build one thing that solves one problem. Use it for a month. See if it actually makes your life better. Then, & only then, decide if you need another one.
And as you build, remember the core principles. Focus on user intent, not just tool-wrapping. Start with read-only access. Use spec-driven development to design before you build. This thoughtful approach is how you create tools that are genuinely useful, not just cool demos. It's how you move from a pile of useless logs to a system that provides real, actionable insights.
I hope this was helpful. It's a topic I'm pretty passionate about because I see so much potential being wasted. Let me know what you think. Have you had similar experiences with MCPs? What's worked for you?
