Ninety-seven million. That's not a vanity metric or some inflated download count from a package manager that double-counts CI pipelines. That's 97 million MCP server installs across the ecosystem as of this month. And I run 14 of them in production every day.
When Anthropic first published the Model Context Protocol spec in late 2024, the developer world split into two camps: those who thought it was just another standard destined for the graveyard next to XML-RPC, and those who saw it for what it actually was -- the missing bridge between language models and the real world. I was firmly in the second camp, and I started integrating and configuring servers before the ecosystem had even stabilized.
What MCP Actually Is (Skip the Marketing)
Strip away the blog posts and conference talks and MCP is straightforward: it's a protocol that lets AI models call external tools through a standardized interface. Think of it as USB for AI. Before USB, every peripheral needed its own driver, its own connector, its own prayer to the hardware gods. MCP does the same thing for tool calling.
An MCP server exposes a set of tools -- functions with typed inputs and outputs -- over a transport layer (stdio or HTTP/SSE). An MCP client (like Claude Code, Cursor, or any compatible host) discovers those tools and can invoke them during a conversation. The model sees the tool descriptions, decides when to use them, and the server executes the actual logic.
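Under the hood this is JSON-RPC 2.0. A sketch of the two core message types -- the `tools/list` and `tools/call` method names come from the MCP spec, while the `get_weather` tool and its arguments are invented for illustration:

```typescript
// Hypothetical illustration of the two JSON-RPC 2.0 exchanges at the
// heart of MCP. Payloads are abbreviated; field names follow the spec.

// 1. Discovery: the client asks the server what tools it exposes.
const listRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/list",
};

// The server replies with names, descriptions, and JSON Schema inputs --
// this is what the model reads to decide when a tool applies.
const listResult = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    tools: [
      {
        name: "get_weather",
        description: "Get current weather for a city",
        inputSchema: {
          type: "object",
          properties: { city: { type: "string" } },
          required: ["city"],
        },
      },
    ],
  },
};

// 2. Invocation: mid-conversation, the client relays the model's
// tool choice as a tools/call request with concrete arguments.
const callRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: {
    name: "get_weather",
    arguments: { city: "Berlin" },
  },
};

console.log(callRequest.method); // tools/call
```

Everything else in the protocol -- resources, prompts, notifications -- layers on top of this same request/response shape.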
MCP didn't invent tool calling. It standardized it. That distinction matters more than most people realize.
Before MCP, every AI platform had its own function-calling format. OpenAI had one schema. Anthropic had another. Google had a third. If you built a tool for one, you rebuilt it for the others. MCP killed that fragmentation. Build once, connect everywhere.
Why 97M Installs Happened Faster Than Anyone Expected
Three forces converged:
- Claude Code went mainstream. When Claude Code shipped with native MCP support, every developer who used it became a potential MCP consumer. And Claude Code usage has been explosive -- it's the default coding tool for a growing segment of professional developers.
- The protocol is dead simple to implement. You can build a working MCP server in under 50 lines of TypeScript. There's no auth negotiation, no complex handshake, no certificate pinning. Stdin/stdout. JSON messages. That's it.
- The tool gap was real. Models are smart but blind. They can reason about code but can't read your database. They can plan a deployment but can't check your CI status. MCP gave them eyes and hands.
The install count also reflects something deeper: developers are done building one-off integrations. Every team I talk to has some brittle Bash script that shells out to their API, parses the JSON with jq, and pipes it back to the model. MCP replaces that entire class of duct tape.
The 14 Servers I Run (And Why Each One Exists)
I didn't set out to run 14 MCP servers. Each one was added to my stack because of a concrete problem I hit while building production systems. Two of them I built from scratch. The rest are open-source or community-built servers that I configured, integrated, and wired into my daily workflow. Here are the ones worth talking about:
Contract Validator (Custom-Built)
Built this from scratch for a client in logistics. Their legal team was reviewing freight contracts manually -- 200+ pages each, three contracts per week. The MCP server connects to their document store, extracts clauses, cross-references against their standard terms, and flags deviations. What took two days now takes 20 minutes.
Framework Developer Agent (Custom-Built)
My second custom MCP server, purpose-built for scaffolding and generating code within specific framework patterns. This one saves hours on repetitive project setup and boilerplate generation.
PAL: Multi-Model Orchestration
PAL is an open-source tool I adopted and configured for my workflow. It routes requests to different AI models -- Gemini, OpenAI, Claude, local models -- based on the task. Need a code review? It sends to multiple models and synthesizes consensus. Need a second opinion on architecture? It fans out to three models and returns structured disagreements.
The key insight: no single model is best at everything. Claude is exceptional at code. Gemini handles massive context windows. GPT-4o is fast for simple tasks. PAL lets me use the right model for each job without switching tools.
PostgreSQL Direct Access
The community-built PostgreSQL MCP server exposes nine tools for direct SQL access from the model. Unrestricted mode for development, read-only mode for production introspection. I configured this early and use it constantly -- instead of writing throwaway API endpoints to answer "how many users signed up this week," I just ask Claude and it queries directly.
The Others
Gmail integration (19 tools), n8n workflow automation (20 tools), filesystem operations, semantic memory storage, Supabase cloud ops, Playwright browser testing, Cloudflare management, web search, code search via grep.app, and more. All community or vendor-built servers that I integrated and configured for my specific infrastructure. Combined: 170+ tools accessible from a single Claude Code session.
Building Your First MCP Server (It's Simpler Than You Think)
Here's a minimal MCP server in TypeScript that exposes a single tool:
```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({
  name: "my-server",
  version: "1.0.0",
});

server.tool(
  "get_weather",
  "Get current weather for a city",
  { city: z.string().describe("City name") },
  async ({ city }) => {
    // Encode the city so spaces and special characters survive the URL
    const response = await fetch(
      `https://api.weather.example/v1?q=${encodeURIComponent(city)}`
    );
    const data = await response.json();
    return {
      content: [{
        type: "text",
        text: `${city}: ${data.temp}F, ${data.condition}`,
      }],
    };
  }
);

const transport = new StdioServerTransport();
await server.connect(transport);
```
That's a fully functional MCP server. Register it in your Claude Code config, and the model can check the weather mid-conversation. The z.string() schema tells the model what parameters the tool expects. The description helps the model decide when to use it.
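Registration itself is a few lines of JSON. A sketch of what the entry might look like in an MCP host config such as Claude Code's -- the server name and file path are placeholders, and the exact config file location varies by host, so check your host's docs:

```json
{
  "mcpServers": {
    "my-server": {
      "command": "node",
      "args": ["/path/to/my-server/build/index.js"]
    }
  }
}
```

The host spawns the command, speaks JSON-RPC over its stdin/stdout, and the tools show up in the next session.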
The real power shows up when you compose multiple tools. The PostgreSQL server exposes nine tools -- query, describe_table, list_tables, explain_plan, and so on -- and the model chains them naturally: "Let me check the schema first, then write a query, then explain the execution plan."
What Most People Get Wrong About MCP
After spending 18 months in this ecosystem, here's what I see people consistently misunderstand:
- "MCP is just function calling." Function calling is the mechanism. MCP is the ecosystem. The difference is like saying "HTTP is just request-response" -- technically true, completely missing the point. MCP gives you discovery, standardization, composability, and a growing marketplace of pre-built servers.
- "I need an MCP server for everything." No. If your tool is a single API call that returns a string, a simple system prompt with the API docs might be enough. MCP shines when you have stateful operations, complex multi-step tools, or need the same tool across multiple AI hosts.
- "Security doesn't matter in development." It does. MCP servers can read your filesystem, query your database, send emails on your behalf. Every server should have explicit permission tiers. My setup uses three: Tier 1 (read-only), Tier 2 (writes), Tier 3 (destructive). Don't give a code review tool write access to your production database.
- "More tools = better." Past about 30-40 tools per session, model performance degrades. The model spends more tokens reasoning about which tool to use. I organize my 170+ tools across multiple servers with specific scopes rather than loading all of them into every session.
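The permission-tier idea can be enforced mechanically rather than by convention. A minimal sketch -- the `Tier` type and `requireTier` wrapper are hypothetical helpers, not part of the MCP SDK; they simply wrap your own tool handlers before registration:

```typescript
// Hypothetical tier gate for MCP tool handlers. Tier 1 = read-only,
// Tier 2 = writes, Tier 3 = destructive. None of this is SDK API.
type Tier = 1 | 2 | 3;

// The maximum tier this session is allowed, e.g. loaded from an env var.
const SESSION_MAX_TIER: Tier = 1;

// Wrap a handler so it refuses to run above the session's tier.
function requireTier<A, R>(tier: Tier, handler: (args: A) => R) {
  return (args: A): R => {
    if (tier > SESSION_MAX_TIER) {
      throw new Error(
        `tool requires tier ${tier}, session allows ${SESSION_MAX_TIER}`
      );
    }
    return handler(args);
  };
}

// A read-only tool runs; a destructive one is blocked in this session.
const listTables = requireTier(1, (_: void) => "users, orders, invoices");
const dropTable = requireTier(3, (name: string) => `dropped ${name}`);

console.log(listTables(undefined)); // allowed: tier 1 <= session tier
```

The same gate composes with per-server scoping: a read-only session simply never registers the Tier 2 and Tier 3 wrappers.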
Where This Is Going
The trajectory is clear. MCP servers are becoming the standard integration layer between AI and everything else. Cloudflare ships one. Stripe is building one. Every major SaaS platform will expose an MCP interface alongside their REST API within the next year.
The more interesting development is agent-to-agent communication via MCP. PAL already does this -- it uses the clink tool to bridge between Claude Code instances, letting one agent delegate subtasks to another. That's the beginning of genuine multi-agent systems, not the demo-ware you see at conferences.
The 97 million number will look quaint by the end of the year. The real question isn't whether MCP wins -- it already has. The question is what happens when every tool, every API, every database, and every service is accessible through a single standardized protocol. That world is about six months away, and I plan to have 20 servers configured and running for it.
If you're configuring MCP servers, building custom ones, or want to compare notes on production infrastructure, reach out. This is the infrastructure layer I've spent the last year and a half obsessing over, and I have opinions about every design decision you'll face.