What Is an MCP Server? A Developer's Primer for 2026
The one-sentence definition
An MCP (Model Context Protocol) server is a local program that exposes tools, resources, and prompts to an LLM client over a standardized JSON-RPC wire format, letting a model read data or take actions against the user's own machine without either side building a custom integration.
That's it. The rest of this post is the unpacking.
Why Anthropic invented MCP
Before MCP, every integration between an LLM and an external system was custom glue. ChatGPT plugins had one approach. Copilot's tool calling had another. Cursor shipped its own file-editing integration. Every vendor reinvented the same four primitives — list a thing, read a thing, do a thing, describe the tools. The result was a walled garden per vendor and a maintenance burden for anyone building a tool.
Anthropic introduced MCP in late 2024 as a small, boring protocol that any LLM client could speak and any tool author could implement. It is deliberately JSON-RPC over a transport of your choice — stdio or HTTP — with a fixed set of methods for discovery and invocation. Nothing clever. That is the point.
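To make "nothing clever" concrete, here is a sketch of what discovery looks like on the wire. The method names `initialize` and `tools/list` come from the MCP spec; the protocol version string and the client info fields are illustrative, so treat the exact params as an assumption rather than a spec quote.

```python
import json

# Two JSON-RPC 2.0 messages a client sends during discovery. On the stdio
# transport, each message is serialized as a single line of JSON.
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-06-18",  # assumed version string
        "clientInfo": {"name": "example-client", "version": "0.1"},
        "capabilities": {},
    },
}
list_tools = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}

wire = "\n".join(json.dumps(m) for m in (initialize, list_tools))
print(wire)
```

The server answers each request with a JSON-RPC response carrying the same `id` — that request/response pairing is the whole invocation model.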
Within eighteen months it went from a niche Anthropic standard to something Claude Desktop, Cursor, GitHub Copilot's agent mode, Continue, Windsurf, and Zed all support. The clients do the same MCP handshake against the same kind of server. That means a tool you write once — for reading HTTP traffic, for instance — works in every IDE.
Stdio vs HTTP transport
MCP supports two transports: stdio (the client spawns the server as a child process and talks over standard input and output) and HTTP (the server listens on a port, the client connects). The choice has real consequences.
Stdio transport is for tools that run on the same machine as the client. The MCP server is a binary or script the client launches. No network port. No authentication layer. The process boundary is the security boundary. This is the right default for local tools: file systems, databases on your laptop, HTTP proxies like Rockxy, and anything else where "the model touches your machine" is the whole point.
HTTP transport is for tools that run elsewhere. A remote database, a SaaS API with an MCP adapter, a shared team server. The server exposes a long-lived HTTP endpoint, usually with streaming responses for tool calls that take time. This transport needs authentication, TLS, and all the usual HTTP hardening — but it lets multiple clients share one server.
A sensible rule of thumb: if the tool is about the user's own machine or the user's own data, use stdio. If it is about a shared system, use HTTP. Do not use HTTP when stdio would work, because HTTP servers listening on localhost are still attack surface.
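The stdio mechanics are simple enough to sketch in a few lines: the client spawns the server as a child process and exchanges newline-delimited JSON-RPC over its stdin/stdout. The child below is a stand-in "server" that echoes the method name back; a real MCP server would implement the full handshake.

```python
import json
import subprocess
import sys

# Stand-in server: reads one JSON-RPC request per line, answers on stdout.
SERVER = r"""
import json, sys
for line in sys.stdin:
    req = json.loads(line)
    resp = {"jsonrpc": "2.0", "id": req["id"], "result": {"echo": req["method"]}}
    print(json.dumps(resp), flush=True)
"""

# The client side: spawn the child, write a request, read the response.
proc = subprocess.Popen(
    [sys.executable, "-c", SERVER],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)
proc.stdin.write(json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/list"}) + "\n")
proc.stdin.flush()
reply = json.loads(proc.stdout.readline())
proc.stdin.close()
proc.wait()
print(reply["result"]["echo"])  # → tools/list
```

Note what is absent: no port, no TLS, no auth layer. The process boundary does all of that work, which is exactly why stdio is the right default for local tools.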
The MCP surface: tools, resources, prompts
An MCP server can expose three kinds of things. Each has a purpose.
Tools
A tool is a function the model can call. It has a name, a description, a JSON Schema for its input, and a return shape. From the model's perspective it is indistinguishable from any other function-calling interface.
```json
{
  "name": "list_flows",
  "description": "List recent HTTP flows captured by the proxy.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "host": { "type": "string" },
      "method": { "type": "string" },
      "status": { "type": "integer" },
      "limit": { "type": "integer", "default": 20 }
    }
  }
}
```
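On the server side, a tool call resolves to a plain function. A minimal sketch of that dispatch, assuming a `list_flows` handler shaped like the schema above — the flow data and registry here are made up for illustration:

```python
# In-memory stand-in for captured proxy traffic.
FLOWS = [
    {"host": "api.example.com", "method": "GET", "status": 200},
    {"host": "api.example.com", "method": "POST", "status": 500},
]

def list_flows(host=None, method=None, status=None, limit=20):
    """Return captured flows, filtered by the optional schema fields.
    Defaults mirror the JSON Schema: every filter optional, limit=20."""
    hits = [
        f for f in FLOWS
        if (host is None or f["host"] == host)
        and (method is None or f["method"] == method)
        and (status is None or f["status"] == status)
    ]
    return hits[:limit]

# A registry maps tool names to handlers; a tools/call request
# carries the name plus arguments validated against the schema.
TOOLS = {"list_flows": list_flows}

def call_tool(name, arguments):
    return TOOLS[name](**arguments)

print(call_tool("list_flows", {"status": 500}))
```

From the model's side, none of this machinery is visible — it sees a named function with a typed input, exactly like any other function-calling interface.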
Resources
A resource is a piece of content the server can hand to the model as reference material — a file, a URL, an excerpt from a database. Resources are addressed by URIs. They are closer to "context the model reads" than "function the model calls."
A filesystem MCP server might expose every file as a resource. An HTTP debugging server might expose each captured flow's response body as a resource.
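A sketch of what that looks like for the HTTP-debugging case, using a made-up `rockxy://` URI scheme (a filesystem server would use `file://` URIs the same way):

```python
# Stand-in for captured response bodies, keyed by flow id.
BODIES = {1: '{"ok": true}', 2: '{"error": "boom"}'}

def list_resources():
    # Each captured flow's response body is addressable by URI.
    return [
        {"uri": f"rockxy://flows/{fid}/response-body", "mimeType": "application/json"}
        for fid in BODIES
    ]

def read_resource(uri):
    # Resolve a URI back to its content; the client reads, the model never calls.
    flow_id = int(uri.rsplit("/", 2)[1])
    return BODIES[flow_id]

print(read_resource(list_resources()[0]["uri"]))  # → {"ok": true}
```

The list/read split is the whole resource API: discovery returns URIs and metadata, and the content only moves when the client asks for it.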
Prompts
A prompt is a reusable template the server offers the client. Think of it as a saved query: "give me a vulnerability summary for this flow," parameterized. Clients surface prompts as slash commands or quick actions. Not every server bothers with prompts — they are the least-used of the three primitives.
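The "saved query" framing is easiest to see as a template plus arguments. The prompt name and wording below are invented for illustration:

```python
import string

# A parameterized prompt the server offers; the client surfaces it as a
# slash command and fills in the argument.
PROMPTS = {
    "vulnerability_summary": string.Template(
        "Review HTTP flow $flow_id for injection, auth, and data-exposure "
        "issues, and summarize findings as a short list."
    )
}

def get_prompt(name, arguments):
    return PROMPTS[name].substitute(arguments)

print(get_prompt("vulnerability_summary", {"flow_id": "42"}))
```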
What makes a good MCP server
Four properties separate MCP servers that stick around from MCP servers that feel like demos.
Narrow tool surface. The Rockxy MCP server exposes four tools. The Linear MCP server could expose forty — tickets, teams, projects, cycles, labels, comments — but a smaller surface is easier for the model to reason about and easier for you to audit. Fewer tools also means fewer names to collide with other MCP servers the user might have loaded.
Idempotent where possible. Read operations should be repeatable with no side effect. Write operations should accept an idempotency key or deduplicate on their own. LLMs retry. They retry when a response is late, when a network blips, when the model decides its first attempt was wrong. A replay tool that creates three duplicate requests when the model tried once is a bug.
Local-first. The default should be that an MCP server runs on the user's machine, not a vendor's cloud. Cloud MCP servers are fine when the data already lives in the cloud — a Notion MCP server is inherently cloud-backed — but a local tool with a cloud MCP wrapper is a leak.
Auditable. Users should be able to see every tool call in their client. Every MCP-aware client does this by default. Server authors should keep tool signatures obvious — no hidden side effects, no tools that do significantly more than their name implies.
Example ecosystem
Concrete examples of what different MCP servers expose:
- Filesystem MCP. Tools for read_file, list_directory, write_file. Resources for every file in a configured root. Useful for letting an assistant read your codebase without copy-paste.
- GitHub MCP. Tools for listing issues, creating PRs, reading repo metadata. Usually HTTP transport against the GitHub API with a token the user provides. Authenticated, remote, but still proxied through a local stdio shim in most clients.
- Postgres MCP. Tools for running read-only queries. Schema as a resource. Narrow surface, idempotent, local (pointing at a local DB).
- Rockxy MCP. Four tools: list_flows, get_flow_detail, replay_request, diff_flows. Stdio transport, local-only, source-visible under AGPL-3.0. Lets Claude Desktop (and other MCP clients) read your HTTP traffic and replay requests.
Each of these is a small surface of tools, scoped to a single domain. None of them tries to be "the AI integration layer for everything." That restraint is what makes the ecosystem composable — your client loads five small servers, each doing one thing, instead of one giant server trying to do everything.
How developers use MCP day-to-day in 2026
A realistic 2026 setup: you have Claude Desktop running for research and debugging, with three MCP servers loaded — filesystem (scoped to your work directory), GitHub, and Rockxy. Your claude_desktop_config.json is maybe thirty lines.
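A config that size might look roughly like this. The overall mcpServers shape matches what Claude Desktop documents; the package names, paths, and the rockxy command are illustrative, so check each server's own install instructions:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/you/work"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_..." }
    },
    "rockxy": {
      "command": "rockxy",
      "args": ["mcp"]
    }
  }
}
```

Each entry is just "a command the client should spawn" — which is the stdio transport in config form.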
You also have Cursor for code editing, with a project-level .cursor/mcp.json that loads the filesystem server plus one or two project-specific servers (your company's internal MCP gateway, say).
The two clients share server binaries. Installing Rockxy once makes it available in both Claude Desktop and Cursor. The configs live in different places, but the server on the other end is the same. That is the value of a shared protocol: you stop shipping integrations per-client.
Day-to-day use looks like this: you ask Claude Desktop "what does the last failing API call look like" and it queries Rockxy. You ask Cursor to "rewrite this function so the test passes" and it invokes the filesystem server to read and edit. You never write boilerplate to teach either client about either tool.
Where it's going next
A few trends are visible at the start of 2026.
Multi-agent handoff. MCP servers are starting to be used not just by interactive chat clients but by background agents that call each other. One agent triages incoming issues, hands promising ones to another agent that reads the codebase, which hands a draft PR to a third that runs tests. Every handoff is an MCP call. The protocol was not designed for this specifically, but it works fine for it.
First-party IDE integrations. Xcode, Visual Studio, JetBrains IDEs, and others are either shipping or prototyping MCP client support. Within a year, every major IDE will speak MCP the way every major IDE now speaks LSP.
Server marketplaces. Distribution is still the weak link. Most MCP servers are installed by editing a JSON file by hand. A marketplace layer — like an app store for MCP servers — is the obvious next step, and several clients are building something along those lines.
Auth and capability models. The current MCP spec is light on authorization. You either trust an MCP server or you don't load it. As the ecosystem grows, per-tool capability grants (the model can call list_flows but not replay_request) will become standard.
If you want to see a small, well-scoped MCP server in action, the Rockxy setup guide takes three minutes and gives you a concrete example of what the four primitives look like wired into a real debugging workflow.