May 14, 2026 · 10 min read · Cadence Editorial

Claude MCP servers explained: the 2026 working guide

Photo by [Brett Sayles](https://www.pexels.com/@brett-sayles) on [Pexels](https://www.pexels.com/photo/server-racks-on-data-center-5480781/)

Claude MCP servers are small programs that expose tools and data to Claude Code over Anthropic's open Model Context Protocol. You install one (Linear, GitHub, Postgres, or your own), point Claude Code at it via ~/.claude.json or a project .mcp.json, and Claude can read tickets, run queries, or hit your APIs without copy-paste. Restart, run /mcp to confirm, and the tools show up in-session.

That is the whole story in three sentences. The rest of this post is what to install, how to write your own server in 50 lines, and what to do when /mcp shows red.

What MCP actually is (and what it isn't)

MCP, the Model Context Protocol, is an open spec Anthropic shipped in late 2024. Anthropic's own analogy is "USB for AI": one wire format that any compatible host (Claude Code, Claude Desktop, Cursor, Continue, Zed) can speak to any compatible server.

The catalog went from a few dozen public servers at the start of 2025 to more than 500 by April 2026. That growth is doing the same thing USB did to peripheral hardware: it collapses N×M integration work into N+M, because each host and each server implements the protocol once instead of every host-server pair needing a custom adapter.

What MCP is not: a model capability. The model still calls tools the same way it always has. MCP is a transport (stdio, HTTP, or SSE) plus a schema convention. The server defines tools and resources; the client (Claude Code) discovers them and shows them to the model. If you've used Claude tool use in production directly via the Messages API, you already understand the inner loop. MCP just makes the tool definitions reusable across hosts and across projects.
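Concretely, the wire format is JSON-RPC 2.0. When Claude Code connects, it calls tools/list and the server replies with its tool schemas. Abridged, with a placeholder tool, the exchange looks like this:

{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }

{ "jsonrpc": "2.0", "id": 1,
  "result": { "tools": [{
    "name": "get_invoice",
    "description": "Fetch invoice metadata by ID",
    "inputSchema": { "type": "object", "properties": { "invoiceId": { "type": "string" } } }
  }] } }

Everything else (tools/call, resources/read) rides the same envelope, which is why one server binary works in every host.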

The 2026 server roster worth installing

Most of the value lives in about a dozen widely used servers. Pick the ones that match the tools your team already lives in.

Code and tickets

  • GitHub MCP: PRs, issues, repo search across your orgs. Single highest-impact install.
  • Linear MCP: read and write tickets, cycles, projects. Replaces the "let me check Linear real quick" tax.
  • Sentry MCP: pull error context, stack traces, and event counts straight into the session. Triage gets faster.

Data and observability

  • Postgres MCP: schema inspection plus ad-hoc SQL. Use a read-only role. Always.
  • Supabase MCP: same idea, with row-level security awareness.
  • Datadog/Honeycomb MCP: query metrics and traces without leaving Claude.

Productivity and comms

  • Slack MCP: post status, search threads, react.
  • Notion MCP: read pages, write summaries, append to a standup doc.
  • Google Drive MCP: search Docs and Sheets.
  • Stripe MCP: subscription lookups, customer history, refund metadata.

Browser and docs

  • Playwright MCP: drive a real browser for E2E checks and visual diffs.
  • Context7: live, version-aware library docs (so Claude doesn't hallucinate API surfaces).

A working ceiling is three to six servers. Every server's tool definitions consume context tokens, and tool definitions get loaded before your prompt does. Tool Search (auto-loading only the relevant tool defs) cuts that overhead by roughly 85% on Sonnet 4 and Opus 4 models, but the protocol-level cost is still real. Install ten servers and you've eaten your context budget before typing.

How to install and configure an MCP server

There are two paths. The CLI is faster for one-offs; direct file editing wins when the config has a lot of env vars or paths.

The CLI:

claude mcp add --transport http github https://api.githubcopilot.com/mcp/

All flags (--transport, --env, --scope) come before the server name. After the command, restart Claude Code, then run /mcp. You should see GitHub listed with its tool count.
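For stdio servers, the launch command goes after a -- separator, with --env and --scope slotting in before the name the same way. A sketch using the Linear package from the config below (flag spellings as of current builds; claude mcp add --help is authoritative):

claude mcp add --scope user --env LINEAR_API_KEY=lin_api_... linear -- npx -y @linear/mcp-server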

The file:

{
  "mcpServers": {
    "linear": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@linear/mcp-server"],
      "env": {
        "LINEAR_API_KEY": "lin_api_..."
      }
    }
  }
}

That block goes in ~/.claude.json for user/local scope, or a project .mcp.json checked into the repo for team-shared configs.

Three scopes determine where the server loads:

| Scope | File | Visible where | Best for |
| --- | --- | --- | --- |
| local | ~/.claude.json | this project, this machine | personal credentials |
| project | .mcp.json | this repo, every teammate | shared servers, no secrets |
| user | ~/.claude.json | every project, this machine | servers you want everywhere (GitHub, Context7) |

Local takes precedence over project, and project over user. The pattern most teams settle on: shared .mcp.json for the servers everyone needs (GitHub, Linear, Postgres pointing at staging), and local-scope overrides for personal API keys.
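One detail that makes the shared-file pattern safe: Claude Code expands ${VAR} references in .mcp.json from each developer's own environment, so the committed file can name a secret without containing it. A sketch of a team .mcp.json using that expansion:

{
  "mcpServers": {
    "linear": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@linear/mcp-server"],
      "env": {
        "LINEAR_API_KEY": "${LINEAR_API_KEY}"
      }
    }
  }
}

Each teammate exports LINEAR_API_KEY in their shell; the repo never holds the key.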

Writing a custom MCP server in 50 lines

The community servers cover the common ground. Custom servers are where MCP earns its keep, because they wrap the internal APIs and tools that no one else will publish.

A working TypeScript server using @modelcontextprotocol/sdk:

import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  ListToolsRequestSchema,
  CallToolRequestSchema
} from "@modelcontextprotocol/sdk/types.js";

// Name, version, and the capabilities advertised during the handshake.
const server = new Server(
  { name: "billing-lookup", version: "0.1.0" },
  { capabilities: { tools: {} } }
);

// tools/list: what Claude sees when it discovers this server.
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [{
    name: "get_invoice",
    description: "Fetch invoice metadata by ID from internal billing API",
    inputSchema: {
      type: "object",
      properties: { invoiceId: { type: "string" } },
      required: ["invoiceId"]
    }
  }]
}));

// tools/call: runs when Claude invokes get_invoice.
server.setRequestHandler(CallToolRequestSchema, async (req) => {
  if (req.params.name !== "get_invoice") {
    throw new Error(`Unknown tool: ${req.params.name}`);
  }
  const { invoiceId } = req.params.arguments as { invoiceId: string };
  const res = await fetch(`https://billing.internal/api/invoices/${invoiceId}`, {
    headers: { Authorization: `Bearer ${process.env.BILLING_TOKEN}` }
  });
  const data = await res.json();
  return { content: [{ type: "text", text: JSON.stringify(data, null, 2) }] };
});

await server.connect(new StdioServerTransport());

That's about 50 lines once you add error handling. Python is the same shape with the official mcp package. Ship it as an npx-installable package or a single binary, and your team adds it to .mcp.json. The same artifact works in Cursor and Continue, because the protocol is open.

What to expose: tools (functions Claude can call) and resources (read-only data Claude can fetch by URI). Tools mutate or compute. Resources just return bytes. Most internal-API wrappers only need tools.
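If you do need a resource, it's two more handlers and a resources capability. A minimal sketch extending the billing-lookup server, with a hypothetical billing://invoices/recent URI and a made-up fetchRecentInvoices() helper:

import {
  ListResourcesRequestSchema,
  ReadResourceRequestSchema
} from "@modelcontextprotocol/sdk/types.js";

// Also add resources: {} to the capabilities passed to the Server constructor.
server.setRequestHandler(ListResourcesRequestSchema, async () => ({
  resources: [{
    uri: "billing://invoices/recent",
    name: "Recent invoices",
    mimeType: "application/json"
  }]
}));

server.setRequestHandler(ReadResourceRequestSchema, async (req) => ({
  contents: [{
    uri: req.params.uri,
    mimeType: "application/json",
    text: JSON.stringify(await fetchRecentInvoices()) // hypothetical internal helper
  }]
}));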

The security model in plain language

MCP servers run on your machine or your infra. Anthropic never sees the credentials. Claude only ever sees the tool's input schema and whatever the tool returns.

That means the blast radius equals whatever the server's tools can do. A Postgres MCP with a read-only role is safe. A Postgres MCP with admin is a footgun. The same shape applies to GitHub tokens (read vs write vs admin), Stripe keys (restricted vs full), and your own custom servers (validate input; rate-limit; log).

Three habits that keep this sane:

  1. Least-privilege credentials. Read-only roles. Fine-grained tokens. Per-project keys.
  2. Per-server enable flag. Keep destructive servers off by default; toggle on when you need them. Claude Code supports this via the enabled field in config.
  3. Audit what your server returns. If your custom server hands Claude a list of customer emails, that data is now in the model's context window. Decide whether that's fine.
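The validate-input half of that advice is cheap to implement for a custom server. A sketch of the billing-lookup handler with a guard in front of the API call; the inv_ ID format is an invented convention for this example:

server.setRequestHandler(CallToolRequestSchema, async (req) => {
  const { invoiceId } = (req.params.arguments ?? {}) as { invoiceId?: string };
  // Reject anything that doesn't look like an invoice ID before it reaches the API.
  if (!invoiceId || !/^inv_[A-Za-z0-9]{8,32}$/.test(invoiceId)) {
    return { isError: true, content: [{ type: "text", text: "Invalid invoiceId" }] };
  }
  const res = await fetch(`https://billing.internal/api/invoices/${invoiceId}`, {
    headers: { Authorization: `Bearer ${process.env.BILLING_TOKEN}` }
  });
  return { content: [{ type: "text", text: await res.text() }] };
});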

This connects directly to the question of when not to use AI to write code. The same judgment applies to MCP: just because Claude can hit your prod database doesn't mean it should.

A real cross-server workflow

The killer use case isn't one server. It's three or four chained.

A weekly task: pull stale Linear tickets, summarize by owner, post to Notion, link blocking PRs.

Prompt: "Use Linear MCP to find tickets in the current cycle idle more than 14 days. Cluster by owner. Use Notion MCP to write a page tagged Engineering Standup with the cluster. Use GitHub MCP to find open PRs that mention any of those ticket IDs and append them as a 'blocking PRs' section."

What happens:

  1. Linear MCP returns 11 stale tickets with owner, last update, status.
  2. Claude clusters by owner in-context.
  3. Notion MCP creates a new page with the clustered list.
  4. GitHub MCP searches PRs across the org for the ticket IDs, finds 3 open PRs, appends them.

Total time: about 90 seconds. The manual version takes 25 minutes weekly. That delta is what AI-native teams capture every day, and it's the kind of multi-step tool calling that engineers steeped in agent loops reach for instinctively.

Steps

If you want to go from zero to a working Claude Code MCP setup, run this in order.

  1. Install Claude Code and confirm it runs. npm i -g @anthropic-ai/claude-code, then claude --version. You should see a 2.x build.
  2. Install your first MCP server (GitHub). Run claude mcp add --transport http github https://api.githubcopilot.com/mcp/. Authenticate via the OAuth flow it prints.
  3. Configure mcp.json for the rest. Open ~/.claude.json (user/local scope) or create .mcp.json in your repo root (project scope). Add the mcpServers block with command, args, and env for each server. Commit .mcp.json if it has no secrets; gitignore it if it does.
  4. Restart Claude Code and verify with /mcp. Quit and relaunch. In a session, run /mcp. Each connected server appears with its tool count. If a server is missing or red, it failed to start.
  5. Use a tool in a real prompt. Try: "Use the GitHub MCP to list my five most recently merged PRs." Claude should call list_pull_requests and return the answer inline.
  6. Debug if /mcp shows red. Read the failure reason. Run the server's start command in a terminal and capture stderr. Check that env vars are passed via the config's env block. Bump MCP_TIMEOUT=15000 if startup is slow on a cold npm cache. Common gotcha: forgetting to npm install -g the package the config references.
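A trick that shortcuts most of step 6: stdio MCP servers speak JSON-RPC on stdin/stdout, so you can smoke-test one without Claude Code by piping it a single initialize request (Linear package as in the config above; the protocolVersion string tracks the spec revision, so use the one your SDK targets):

echo '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"smoke-test","version":"0.0.0"}}}' | npx -y @linear/mcp-server

A healthy server prints a JSON result with its serverInfo; anything on stderr instead is the startup failure you're hunting.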

Debugging when /mcp shows red

The four-step sequence above is the loop. A few extra notes from production use:

  • Logs. Claude Desktop writes to ~/Library/Logs/Claude/ on macOS. Claude Code prints MCP server stderr to its own log; check the path claude doctor suggests.
  • Auth failures. OAuth-based servers (Linear, Sentry, Notion) sometimes silently fail when the refresh token expires. Re-run the claude mcp add flow.
  • Tool def explosion. If /mcp shows green but Claude is sluggish, count your active tools. Above 80 or so, even Tool Search has trouble keeping latency reasonable. Disable the servers you don't need this session.
  • Schema mismatches. Custom servers crash when Claude sends an argument the schema doesn't validate. Tighten your inputSchema, log the raw input, and you'll find the typo in a minute.
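On that last point: when you log raw input from a stdio server, it has to go to stderr. stdout carries the JSON-RPC stream, so a stray console.log corrupts the protocol and the client reports a parse error instead of showing your message:

// Inside any request handler of a stdio MCP server:
// console.log(req.params);  // DON'T: stdout belongs to the JSON-RPC stream
console.error("raw input:", JSON.stringify(req.params)); // safe: stderr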

When in doubt, the same discipline that powers a good LLM eval suite helps here: make the failure reproducible, capture the inputs, and iterate.

What this means for hiring engineers in 2026

Knowing MCP isn't a specialty anymore. It's a baseline for any engineer working in Claude Code, Cursor, or Continue daily. If you're hiring, the question isn't "have you used MCP?" The question is "show me a custom server you've written, and walk me through your .mcp.json."

Every engineer on Cadence is AI-native by default. The platform's voice interview specifically scores Claude Code, Cursor, and Copilot fluency, plus the prompt-as-spec discipline and multi-step prompt-ladder thinking that wiring up MCP servers requires. A score of 50 out of 100 unlocks bookings; the median engineer scores higher. There is no non-AI-native option on Cadence, because in 2026 that distinction stopped meaning anything.

If you're trying to figure out whether your candidates have these reflexes, our AI engineering interview questions post has a category covering tool fluency that pairs well with this one. If you'd rather skip the loop entirely, you can book a Cadence engineer on a 48-hour free trial and see for yourself how they wire MCP into their first day.

Want to ship MCP-driven workflows next week instead of next quarter? Cadence shortlists vetted engineers in 2 minutes, with a 48-hour free trial and weekly billing. Book your first engineer at the junior ($500/wk), mid ($1k/wk), senior ($1.5k/wk), or lead ($2k/wk) tier and ship.

FAQ

What's the difference between MCP and a Claude tool definition?

A Claude tool is a function schema you pass inline in a single Messages API request. An MCP server packages many tools, runs as a long-lived process on your machine, and any MCP-aware client (Claude Code, Cursor, Continue, Zed) can use it. Same underlying tool-call loop; different lifecycle.

Can I use MCP servers with models other than Claude?

Yes. As of 2026, Cursor, Continue, Zed, and several open-source agents speak MCP. The protocol is open and not Claude-specific. The same custom server you write for Claude Code works in Cursor without changes.

How many MCP servers should I have installed?

Three to six is the working sweet spot. Each server's tool definitions consume context tokens before your prompt loads. Ten servers will measurably hurt latency and quality, even with Tool Search dynamically loading only what's relevant.

Is it safe to install community MCP servers?

Treat them like any third-party CLI tool. Read the source if it touches credentials. Prefer servers with recent commits and meaningful star counts. Run with the least-privileged credential possible. The MCP server runs on your machine with your API keys, so the trust bar is the same as installing an npm package globally.

Can I write an MCP server in Python?

Yes. Anthropic ships an official Python SDK alongside the TypeScript one. A working server is roughly 50 lines either way. Pick whichever language your team is faster in; the protocol behaves identically.
