Model Context Protocol

The LSP of AI agents – a universal tool protocol that turns \(N \times M\) into \(N + M\)

mcp
extensibility
protocol

Introduction: The \(N \times M\) Integration Problem

An AI coding agent needs to connect to many external services – GitHub for pull requests, Postgres for database queries, Jira for issue tracking, Slack for notifications. Without a standard protocol, every agent-service pair requires a bespoke integration: 5 agents talking to 20 services means 100 custom adapters. This is an \(N \times M\) problem. The Model Context Protocol (MCP) standardizes the interface between AI agents and tool services, reducing the cost to \(N + M\): one adapter per agent, one adapter per service, and any agent talks to any service. 5 + 20 = 25 instead of 100.

%%{init: {'theme': 'neutral', 'flowchart': {'useMaxWidth': false, 'htmlLabels': true, 'padding': 20, 'nodeSpacing': 30, 'rankSpacing': 40}, 'themeVariables': {'primaryColor': '#8B9DAF', 'primaryTextColor': '#ffffff', 'primaryBorderColor': '#6E7F91', 'secondaryColor': '#9CAF88', 'secondaryTextColor': '#ffffff', 'secondaryBorderColor': '#7A8D68', 'tertiaryColor': '#C2856E', 'tertiaryTextColor': '#ffffff', 'tertiaryBorderColor': '#A06A54', 'lineColor': '#B5A99A', 'textColor': '#4A4A4A', 'mainBkg': '#8B9DAF', 'nodeBorder': '#6E7F91', 'clusterBkg': 'rgba(139,157,175,0.12)', 'clusterBorder': '#B5A99A', 'edgeLabelBackground': 'transparent'}}}%%
flowchart LR
  subgraph Without["Without Protocol: N x M"]
    direction LR
    c1["Claude Code"] ~~~ s1["GitHub"]
    c2["Cursor"] ~~~ s2["Postgres"]
    c3["Copilot"] ~~~ s3["Jira"]
    c1 --- s1
    c1 --- s2
    c1 --- s3
    c2 --- s1
    c2 --- s2
    c2 --- s3
    c3 --- s1
    c3 --- s2
    c3 --- s3
  end

  subgraph With["With MCP: N + M"]
    direction LR
    c4["Claude Code"] --> MCP["<b>MCP</b>"]
    c5["Cursor"] --> MCP
    c6["Copilot"] --> MCP
    MCP --> s4["GitHub"]
    MCP --> s5["Postgres"]
    MCP --> s6["Jira"]
  end
  style c1 fill:#8B9DAF,color:#fff,stroke:#6E7F91
  style s1 fill:#9CAF88,color:#fff,stroke:#7A8D68
  style c2 fill:#C2856E,color:#fff,stroke:#A06A54
  style s2 fill:#B39EB5,color:#fff,stroke:#8E7A93
  style c3 fill:#C4A882,color:#fff,stroke:#A08562
  style s3 fill:#8E9B7A,color:#fff,stroke:#6E7B5A
  style c4 fill:#8B9DAF,color:#fff,stroke:#6E7F91
  style MCP fill:#9CAF88,color:#fff,stroke:#7A8D68
  style c5 fill:#C2856E,color:#fff,stroke:#A06A54
  style c6 fill:#B39EB5,color:#fff,stroke:#8E7A93
  style s4 fill:#C4A882,color:#fff,stroke:#A08562
  style s5 fill:#8E9B7A,color:#fff,stroke:#6E7B5A
  style s6 fill:#8B9DAF,color:#fff,stroke:#6E7F91
Figure 1: The N x M to N + M reduction illustrated with three clients and three services. Without a standard protocol, every client-server pair requires a bespoke adapter (9 total), and the count grows quadratically. With MCP as a universal intermediary, each side implements one adapter (3 + 3 = 6), and adding a new client or service costs exactly one adapter rather than N or M. This is the same economic argument that drove adoption of ODBC, LSP, and USB.

How to read this diagram. The left subgraph (“Without Protocol”) shows every client connected to every service with 9 individual links – this is the N x M explosion. The right subgraph (“With MCP”) shows the same three clients and three services routed through a single MCP hub, requiring only 6 adapters total. The takeaway is that adding a fourth client or service on the left adds 3 new links, while on the right it adds exactly 1.

Source files covered in this post:

| File | Purpose | Size |
| --- | --- | --- |
| src/services/mcp/client.ts | MCP client implementation (connection, tool registration) | ~1,500 LOC |
| src/services/mcp/auth.ts | MCP authentication (OAuth, API keys) | ~1,100 LOC |
| src/services/mcp/config.ts | MCP server configuration and discovery | ~600 LOC |
| src/services/mcp/normalization.ts | Tool name sanitization and namespacing | ~200 LOC |
| src/services/mcp/types.ts | MCP type definitions | ~150 LOC |
| src/tools/MCPTool/ | MCP tool invocation and result formatting | 6 files |
| src/tools/ListMcpResourcesTool/ | MCP resource listing | ~5 files |
| src/tools/ReadMcpResourceTool/ | MCP resource reading | ~5 files |
| src/tools/McpAuthTool/ | MCP authentication tool | ~3 files |

Protocol Design: JSON-RPC over Three Transports

MCP’s wire format is JSON-RPC 2.0 – the same protocol LSP uses. Every message is a JSON object with a method, params, and either an id (for requests expecting responses) or no id (for notifications). This choice is not accidental: JSON-RPC is minimal, language-agnostic, and well-understood. It has none of the overhead of gRPC’s protobuf or GraphQL’s query parsing, and none of the ambiguity of custom text protocols.

The architecture consists of three layers: a configuration layer (where servers are defined), a client layer (where connections are managed), and a transport layer (how bytes move). Understanding the layers explains why MCP can be simultaneously simple to use and complex to implement.

%%{init: {'theme': 'neutral', 'flowchart': {'useMaxWidth': false, 'htmlLabels': true, 'padding': 20, 'nodeSpacing': 30, 'rankSpacing': 40}, 'themeVariables': {'primaryColor': '#8B9DAF', 'primaryTextColor': '#ffffff', 'primaryBorderColor': '#6E7F91', 'secondaryColor': '#9CAF88', 'secondaryTextColor': '#ffffff', 'secondaryBorderColor': '#7A8D68', 'tertiaryColor': '#C2856E', 'tertiaryTextColor': '#ffffff', 'tertiaryBorderColor': '#A06A54', 'lineColor': '#B5A99A', 'textColor': '#4A4A4A', 'mainBkg': '#8B9DAF', 'nodeBorder': '#6E7F91', 'clusterBkg': 'rgba(139,157,175,0.12)', 'clusterBorder': '#B5A99A', 'edgeLabelBackground': 'transparent'}}}%%
flowchart TD
  subgraph Config["Configuration Layer"]
    C1[".claude/<br>settings.json"]
    C2["~/.claude/<br>settings.json"]
    C3[".mcp.json<br>per-directory"]
    C4["claude mcp<br>CLI commands"]
  end

  subgraph Client["Client Layer"]
    CL1["Connection<br>Manager"]
    CL2["Tool<br>Registry"]
    CL3["Resource<br>Manager"]
    CL4["Reconnection<br>Logic"]
  end

  subgraph Transport["Transport Layer"]
    T1["<b>stdio</b><br><i>local process</i>"]
    T2["<b>SSE</b><br><i>remote server</i>"]
    T3["<b>HTTP</b><br><i>OAuth + stream</i>"]
    T4["<b>WebSocket</b><br><i>browser bridge</i>"]
  end

  Config --> Client --> Transport
  style C1 fill:#8B9DAF,color:#fff,stroke:#6E7F91
  style C2 fill:#9CAF88,color:#fff,stroke:#7A8D68
  style C3 fill:#C2856E,color:#fff,stroke:#A06A54
  style C4 fill:#B39EB5,color:#fff,stroke:#8E7A93
  style CL1 fill:#C4A882,color:#fff,stroke:#A08562
  style CL2 fill:#8E9B7A,color:#fff,stroke:#6E7B5A
  style CL3 fill:#8B9DAF,color:#fff,stroke:#6E7F91
  style CL4 fill:#9CAF88,color:#fff,stroke:#7A8D68
  style T1 fill:#C2856E,color:#fff,stroke:#A06A54
  style T2 fill:#B39EB5,color:#fff,stroke:#8E7A93
  style T3 fill:#C4A882,color:#fff,stroke:#A08562
  style T4 fill:#8E9B7A,color:#fff,stroke:#6E7B5A
Figure 2: MCP architecture stack organized into three layers. The configuration layer merges settings from project, user, per-directory, and CLI sources with clear precedence rules. The client layer manages connection state, tool registration, resource access, and reconnection logic. The transport layer abstracts four communication mechanisms – stdio for local processes, SSE for team servers, streamable HTTP with OAuth for enterprise environments, and WebSocket for browser extension bridges – letting the same JSON-RPC messages flow over any channel.

How to read this diagram. Start at the top with the Configuration Layer, where four sources of server definitions feed into the Client Layer below. The Client Layer manages connections, tool registration, resources, and reconnection. Arrows flow downward into the Transport Layer, which shows the four concrete mechanisms (stdio, SSE, HTTP, WebSocket) over which JSON-RPC messages travel. The key takeaway is that the same client logic works regardless of which transport is used at the bottom.

The configuration layer is intentionally pluralistic. Project-level settings (.claude/settings.json) define servers specific to a repository. User-level settings (~/.claude/settings.json) define servers that follow you across projects. The .mcp.json file provides per-directory overrides. The resolution order (project > user > dynamic) means project-specific servers can override user defaults, just as a project-level .npmrc overrides ~/.npmrc.
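The precedence rule itself is a one-liner. A sketch, assuming configs are plain maps from server name to definition (the real merge handles more sources plus validation):

```typescript
// Sketch of the precedence described above (project > user > dynamic).
// Shapes and the function name are illustrative, not Claude Code's code.
type ServerConfig = { command: string; args?: string[] };
type ServerMap = Record<string, ServerConfig>;

function resolveServers(
  dynamic: ServerMap,
  user: ServerMap,
  project: ServerMap
): ServerMap {
  // Later spreads win: project-level definitions shadow user-level ones,
  // which in turn shadow dynamically registered servers.
  return { ...dynamic, ...user, ...project };
}
```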


Tool Discovery: Dynamic Capability Registration

When Claude Code starts, it does not know what tools its MCP servers provide. It discovers them through a strict three-step handshake that mirrors capability negotiation in protocols from HTTP/2 to TLS.

Step 1: initialize. The client sends an initialize request containing its own capabilities (which MCP features it supports). The server responds with its capabilities – which features it implements, what tool types it offers, whether it supports subscriptions. This is capability negotiation: both sides declare what they can do, and the intersection determines what actually happens.

Step 2: tools/list. The client calls tools/list to discover every tool the server provides. Each tool comes with a name, description, JSON Schema for its input parameters, and an optional annotations object carrying metadata like readOnly and destructive. The response is paginated for servers with many tools.

Step 3: register. For each discovered tool, the client creates a local registration using a deterministic naming scheme:

mcp__{sanitize(serverName)}__{sanitize(toolName)}

The sanitize function replaces every non-alphanumeric character with an underscore. This produces names like mcp__github__create_issue or mcp__postgres__run_query. The double-underscore delimiter is chosen because it is unlikely to appear in natural tool or server names, making the name reliably parseable in both directions. This is a classic namespace flattening technique – the same approach Python uses for name mangling (__private becomes _ClassName__private) and DNS uses for service records (_sip._tcp.example.com).
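A sketch of the scheme – both `sanitize` and the parse helper are reconstructions from the description above, not the actual normalization.ts code:

```typescript
// Replace every non-alphanumeric character with an underscore,
// as described above. (Illustrative reconstruction.)
function sanitize(name: string): string {
  return name.replace(/[^a-zA-Z0-9]/g, "_");
}

function namespaceTool(serverName: string, toolName: string): string {
  return `mcp__${sanitize(serverName)}__${sanitize(toolName)}`;
}

// The double-underscore delimiter lets the name be split back apart.
// Non-greedy match: the server name is the shortest prefix before "__".
function parseNamespacedTool(
  name: string
): { server: string; tool: string } | null {
  const match = /^mcp__(.+?)__(.+)$/.exec(name);
  return match ? { server: match[1], tool: match[2] } : null;
}
```

For example, `namespaceTool("github", "create-issue")` yields `mcp__github__create_issue`, and parsing that name recovers the server half for routing.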

%%{init: {'theme': 'neutral', 'flowchart': {'useMaxWidth': false, 'htmlLabels': true, 'padding': 20, 'nodeSpacing': 30, 'rankSpacing': 40}, 'themeVariables': {'primaryColor': '#8B9DAF', 'primaryTextColor': '#ffffff', 'primaryBorderColor': '#6E7F91', 'secondaryColor': '#9CAF88', 'secondaryTextColor': '#ffffff', 'secondaryBorderColor': '#7A8D68', 'tertiaryColor': '#C2856E', 'tertiaryTextColor': '#ffffff', 'tertiaryBorderColor': '#A06A54', 'lineColor': '#B5A99A', 'textColor': '#4A4A4A', 'mainBkg': '#8B9DAF', 'nodeBorder': '#6E7F91', 'clusterBkg': 'rgba(139,157,175,0.12)', 'clusterBorder': '#B5A99A', 'edgeLabelBackground': 'transparent'}}}%%
sequenceDiagram
  participant CC as Claude Code
  participant SRV as MCP Server

  Note over CC,SRV: Step 1 - Capability Negotiation
  CC->>SRV: initialize {capabilities: {tools, resources}}
  SRV->>CC: result {capabilities: {tools: {list: true}}}

  Note over CC,SRV: Step 2 - Tool Discovery
  CC->>SRV: tools/list
  SRV->>CC: [create_issue, list_repos, search_code, ...]

  Note over CC: Step 3 - Local Registration
  CC->>CC: Register mcp__github__create_issue
  CC->>CC: Register mcp__github__list_repos
  CC->>CC: Register mcp__github__search_code

  Note over CC,SRV: Tools now appear in system prompt<br>alongside Read, Write, Bash
Figure 3: The three-step MCP handshake between Claude Code and an MCP server. Step 1 (initialize) performs capability negotiation where both sides declare supported features. Step 2 (tools/list) discovers every tool the server provides, with JSON Schema for each. Step 3 registers each tool locally under a deterministic namespaced name (mcp__server__tool). After this handshake, MCP tools are indistinguishable from built-in tools like Read, Write, and Bash in the model’s context window.

How to read this diagram. Time flows downward. Claude Code (left) initiates the conversation by sending an initialize request to the MCP Server (right), which responds with its capabilities. Next, Claude Code requests the tool list and receives the available tools. In Step 3, Claude Code talks to itself – registering each discovered tool under a namespaced name like mcp__github__create_issue. After the three steps complete, MCP tools appear alongside built-in tools in the system prompt.

Once registered, MCP tools are indistinguishable from built-in tools in the model’s context window. The model sees mcp__github__create_issue alongside Read and Write in its tool list, calls it with a tool_use block, and receives a tool_result block. The routing to the MCP server is completely transparent. This is the key design property: MCP tools are first-class citizens in the tool registry.

Caution – Pattern Spotted

The three-step handshake is the capability negotiation pattern found throughout networking. HTTP/2 uses SETTINGS frames for the same purpose: client and server declare what they support, and the connection uses the intersection. TLS does it with cipher suite negotiation. WiFi does it with association frames. The pattern is always: declare, intersect, proceed with the common ground.


Transport Layers: stdio, SSE, Streamable HTTP

MCP defines the messages; transports define how those messages move. Each transport mechanism serves a different deployment scenario with different trade-offs in latency, security, and operational complexity.

stdio: The Local Default

The stdio transport spawns a local process and pipes JSON-RPC messages over stdin/stdout. The MCP client calls child_process.spawn() with the configured command, writes JSON to the child’s stdin, and reads JSON from the child’s stdout. Stderr is captured separately for diagnostics – it is not part of the JSON-RPC channel, so the server can safely log debug information to stderr without corrupting the protocol stream.
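A stripped-down sketch of that flow, assuming newline-delimited JSON framing (the function name, shape, and error handling here are illustrative, not client.ts):

```typescript
import { spawn } from "node:child_process";

// Sketch of the stdio transport: spawn the server process, write JSON-RPC
// to its stdin, and keep stderr out of the protocol stream entirely.
function startStdioServer(
  command: string,
  args: string[],
  extraEnv: Record<string, string>
) {
  const child = spawn(command, args, {
    env: { ...process.env, ...extraEnv },
    stdio: ["pipe", "pipe", "pipe"], // stdin/stdout carry JSON-RPC; stderr is logs
  });

  // Diagnostics only – never parsed as part of the JSON-RPC channel.
  child.stderr?.on("data", (chunk) => console.error(`[mcp-server] ${chunk}`));

  const send = (msg: object) => child.stdin?.write(JSON.stringify(msg) + "\n");
  return { child, send };
}
```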

This is by far the most common transport, and for good reason: it has the lowest operational complexity. No ports to allocate, no certificates to manage, no firewall rules to configure. A typical configuration:

{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_TOKEN": "${GITHUB_TOKEN}" }
    }
  }
}

The env field enables secret injection without hardcoding tokens – ${GITHUB_TOKEN} is resolved from the shell environment at spawn time, keeping credentials out of configuration files. The limitation is locality: stdio servers must run on the same machine as Claude Code.
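The `${VAR}` expansion can be sketched in a few lines. This resolver is illustrative; in particular, leaving unset placeholders untouched rather than erroring is an assumption, not documented behavior:

```typescript
// Sketch of ${VAR} expansion at spawn time. Unset variables are left
// as-is here (an assumption); the real resolver may differ.
function resolveEnvPlaceholders(
  env: Record<string, string>,
  shellEnv: Record<string, string | undefined> = process.env
): Record<string, string> {
  const resolved: Record<string, string> = {};
  for (const [key, value] of Object.entries(env)) {
    resolved[key] = value.replace(
      /\$\{(\w+)\}/g,
      (whole, name) => shellEnv[name] ?? whole
    );
  }
  return resolved;
}
```

The important property is that the token value never touches the configuration file on disk; it exists only in the spawned process's environment.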

SSE: Remote Servers via HTTP

The SSE (Server-Sent Events) transport uses HTTP for the request path and SSE for the response path. The client sends JSON-RPC requests via HTTP POST; the server holds open an SSE connection and streams responses back as events. This enables a team to run shared MCP servers on internal infrastructure. One Postgres MCP server serves the entire team, with each developer’s Claude Code connecting over HTTP.

SSE also enables a deployment pattern impossible with stdio: centralized upgrades. When the team’s Postgres MCP server is updated to support a new tool, every developer gets it immediately on their next connection – no local package update, no restart required.

HTTP: Streamable with OAuth

The HTTP transport is the enterprise-grade evolution of SSE. It uses streamable HTTP with OAuth 2.0 support – PKCE flows, device code grants, token refresh. The auth implementation alone is 88,754 bytes in the codebase, reflecting the complexity of enterprise identity. HTTP with OAuth is necessary when the MCP server sits behind an identity provider and needs per-user authorization.

WebSocket: The Browser Bridge

The WebSocket transport is specialized: it connects Claude Code to a Chrome browser extension that exposes browser automation tools. The extension registers 8 tools: javascript_tool, read_page, find, form_input, computer, navigate, resize_window, and gif_creator. This transport has a 2,000ms timeout for tabs context discovery, reflecting its purpose: browser automation should be fast.

| Transport | Latency | Auth | Deployment |
| --- | --- | --- | --- |
| stdio | < 1ms | Process isolation | Local dev, single user |
| SSE | 10–100ms | Network/token | Team servers, internal |
| HTTP | 50–200ms | OAuth 2.0 (PKCE) | Enterprise, IdP |
| WebSocket | 5–50ms | Extension perms | Browser automation |

Security Model: Permissions and Annotations

MCP tools do not bypass Claude Code’s permission system. They go through the same three-tier architecture as built-in tools (see Part IV.2), with annotations providing the metadata that permissions need.

Every MCP tool can carry an annotations object with two key fields:

  • readOnly: true – The tool does not modify external state. In permissive permission modes, read-only tools may be auto-approved without prompting the user.
  • destructive: true – The tool modifies external state in a way that may be difficult to reverse. Destructive tools always prompt for confirmation, even in the most permissive mode.

%%{init: {'theme': 'neutral', 'flowchart': {'useMaxWidth': false, 'htmlLabels': true, 'padding': 20, 'nodeSpacing': 30, 'rankSpacing': 40}, 'themeVariables': {'primaryColor': '#8B9DAF', 'primaryTextColor': '#ffffff', 'primaryBorderColor': '#6E7F91', 'secondaryColor': '#9CAF88', 'secondaryTextColor': '#ffffff', 'secondaryBorderColor': '#7A8D68', 'tertiaryColor': '#C2856E', 'tertiaryTextColor': '#ffffff', 'tertiaryBorderColor': '#A06A54', 'lineColor': '#B5A99A', 'textColor': '#4A4A4A', 'mainBkg': '#8B9DAF', 'nodeBorder': '#6E7F91', 'clusterBkg': 'rgba(139,157,175,0.12)', 'clusterBorder': '#B5A99A', 'edgeLabelBackground': 'transparent'}}}%%
flowchart TD
  toolcall["Model outputs<br>tool use block"]
  parse["findToolByName()<br>Is this mcp__* ?"]
  splitns["Parse server and tool<br>from namespaced name"]
  perm{"Check annotations<br>and permission mode"}

  approve["Auto-approve<br>readOnly, permissive"]
  prompt["Prompt user<br>destructive or unknown"]
  blocked["Block<br>denied by policy"]

  route["Route to MCP server<br>return tool result"]

  toolcall --> parse --> splitns --> perm
  perm --> approve
  perm --> prompt
  perm --> blocked
  approve --> route
  prompt --> route
  style toolcall fill:#8B9DAF,color:#fff,stroke:#6E7F91
  style parse fill:#9CAF88,color:#fff,stroke:#7A8D68
  style splitns fill:#C2856E,color:#fff,stroke:#A06A54
  style perm fill:#B39EB5,color:#fff,stroke:#8E7A93
  style approve fill:#C4A882,color:#fff,stroke:#A08562
  style prompt fill:#8E9B7A,color:#fff,stroke:#6E7B5A
  style blocked fill:#8B9DAF,color:#fff,stroke:#6E7F91
  style route fill:#9CAF88,color:#fff,stroke:#7A8D68
Figure 4: Permission flow for MCP tools, which follows the identical three-tier path as built-in tools. The namespaced tool name is parsed to identify the originating server, then annotations (readOnly, destructive) and the current permission mode determine the outcome: auto-approve, prompt the user, or block. This uniform treatment is the Liskov Substitution Principle applied at the tool level – no special-case branching for MCP tools anywhere in the permission or hook code.

How to read this diagram. Start at the top where the model outputs a tool use block. The flow moves downward through name parsing and server/tool extraction, arriving at the diamond-shaped decision node that checks annotations and the current permission mode. Three branches fan out: auto-approve (for read-only tools in permissive mode), prompt the user (for destructive or unknown tools), or block (if denied by policy). Both the approve and prompt paths converge on routing the call to the MCP server, while the block path is a dead end.

Pre-tool-use and post-tool-use hooks also fire for MCP tools. A hook configured to match mcp__github__* will intercept every GitHub MCP tool call. This means the same audit logging, policy enforcement, and automation that works for built-in tools works identically for MCP tools. The model, the permission system, and the hook system all treat MCP tools as first-class.
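The branch at the diamond in Figure 4 reduces to a small decision function. A sketch – the mode names and the prompt-by-default behavior for unannotated tools are chosen for illustration:

```typescript
// Sketch of the annotation-driven permission decision (Figure 4).
// Names and mode labels are illustrative, not Claude Code's actual code.
type Annotations = { readOnly?: boolean; destructive?: boolean };
type Mode = "permissive" | "default";
type Decision = "auto-approve" | "prompt" | "block";

function decide(ann: Annotations, mode: Mode, deniedByPolicy: boolean): Decision {
  if (deniedByPolicy) return "block"; // explicit policy denial wins
  if (ann.destructive) return "prompt"; // destructive tools always confirm
  if (ann.readOnly && mode === "permissive") return "auto-approve";
  return "prompt"; // unknown or unannotated tools prompt by default
}
```

Note the asymmetry: `destructive` is checked before `readOnly`, so a tool carrying both annotations still prompts – a conservative default when metadata conflicts.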


Server Examples and the Three Tool Tiers

Claude Code operates with three categories of tools, each running in a different process with different trust and performance characteristics.

Built-in tools are TypeScript functions compiled into the Claude Code binary. They run in-process, have direct access to the filesystem and shell, and are subject to the local platform sandbox (see Part IV.2). They are the fastest tools because there is no inter-process communication.

MCP tools run in a separate process or on a remote server. They communicate via JSON-RPC, which adds serialization overhead and network latency. But they can be written in any language and can access services that are not available locally – GitHub, Postgres, Jira, and any custom API.

Server-side tools are a third category that many users do not realize exists. When the Anthropic API response contains a server_tool_use content block, it means the model invoked a tool that ran on Anthropic’s infrastructure – not on the user’s machine. web_search, web_fetch, and code_execution are server-side tools. They do not go through Claude Code’s permission system or sandbox, because they never execute locally.

| Attribute | Built-in | MCP | Server-side |
| --- | --- | --- | --- |
| Location | In-process | Cross-process/remote | Anthropic cloud |
| Latency | < 0.1ms | 1–200ms | 200ms–2s |
| Count | 40+ | Unlimited | 3–5 |
| Sandbox | OS-level | Server-side | Managed |
| Permission | 3-tier | 3-tier + annotations | API-level |
| Language | TypeScript | Any | N/A |

Anthropic also operates a cloud MCP proxy at mcp-proxy.anthropic.com that hosts managed servers, eliminating the need for local installation. The proxy supports two patterns: direct proxy (the server runs entirely in Anthropic’s cloud) and toolbox proxy (Anthropic’s infrastructure proxies to an external service). This creates a two-tier ecosystem analogous to AWS: managed services for common needs and the open protocol for everything custom.

Connection Lifecycle: The State Machine

Each MCP server connection is managed by a state machine with six states. The state machine handles the messy reality of long-running connections: servers crash, networks drop, processes exit unexpectedly.

The reconnection logic uses exponential backoff with a floor of 1,000ms and a ceiling of 30,000ms. After 5 failed attempts, the server transitions to error state and stops reconnecting. The user can manually retry or disable the server entirely. A disabled server is a terminal state – it will not reconnect until explicitly re-enabled. The tool call timeout is 120,000ms (2 minutes), accommodating long-running operations like database migrations or complex API queries.
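A sketch of the backoff schedule – the floor, ceiling, and 5-attempt limit come from the description above, while the doubling base is an assumption:

```typescript
// Exponential backoff sketch: 1,000ms floor, 30,000ms ceiling,
// give up after 5 attempts. The doubling schedule is assumed.
function reconnectDelayMs(attempt: number): number | null {
  const MAX_ATTEMPTS = 5;
  if (attempt >= MAX_ATTEMPTS) return null; // transition to error state
  return Math.min(1000 * 2 ** attempt, 30000);
}
// Attempts 0..4 wait 1s, 2s, 4s, 8s, 16s; attempt 5 gives up.
```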

A subtle race condition arises when the model calls an MCP tool while the server is reconnecting. The tool call is queued and executed once the connection is re-established – or fails if the server transitions to error or disabled. This queuing prevents the model from seeing transient connection failures as permanent tool unavailability.

Caution – Pattern Spotted

This is the Circuit Breaker pattern from distributed systems. A circuit breaker has three states: closed (requests flow through), open (requests are rejected), and half-open (a test request determines if the service has recovered). MCP’s state machine has the same structure: connected is closed, error/disabled is open, and reconnecting is half-open. The exponential backoff prevents thundering herd problems, just as a circuit breaker prevents cascading failures.

Cache Economics of MCP Instructions

MCP servers can inject instructions into Claude Code’s system prompt – text explaining how to use the server’s tools, domain context (such as a database schema), or conventions (“always use parameterized queries”). This is architecturally significant: the MCP instructions section is the only cache-breaking region in Claude Code’s entire system prompt.

The system prompt runs to more than 15,000 tokens of carefully crafted instructions. Anthropic’s prompt caching stores this verbatim and reuses it across turns, saving 90% of input token costs. But caching requires byte-for-byte identity. MCP servers can connect and disconnect between turns, so their instructions must be recomputed every time.

A 200-token MCP instruction costs 200 tokens at full price on every turn. The same 200 tokens in the cached region would cost only 10% per turn – a 10x difference. Over a 50-turn session, that is 10,000 token-equivalents (cache-breaking) versus 1,000 token-equivalents (cached). The implication is direct: MCP server authors should keep their instructions as short as possible.
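The arithmetic behind those figures:

```typescript
// Cache economics of a 200-token MCP instruction over a 50-turn session.
const tokens = 200;
const turns = 50;

// Cache-breaking region: the full 200 tokens are billed every turn.
const uncachedCost = tokens * turns;

// Cached region: cache reads bill at roughly 10% of the full input price.
const cachedCost = (tokens * turns) / 10;

// uncachedCost = 10,000 token-equivalents; cachedCost = 1,000.
```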

Warning – Trade-off

MCP instructions give server authors powerful control over how the model uses their tools – but at a direct, per-turn cost. The design deliberately accepts this cost because the alternative (no server instructions, model must figure out tools from schema alone) leads to worse tool usage and more wasted turns. The right approach is not to avoid instructions but to write them as concisely as possible.


Summary

The Model Context Protocol reveals several design principles that extend beyond AI agents into any system integrating with a growing ecosystem of services.

The \(N \times M\) to \(N + M\) reduction is the single most important insight in protocol design. When client-service pairs grow quadratically, a standard protocol is not a luxury – it is an economic necessity. MCP, like ODBC and LSP before it, trades protocol standardization effort (paid once) for integration effort (paid \(N \times M\) times). The arithmetic favors the standard almost immediately: \(N + M < N \times M\) whenever \((N-1)(M-1) > 1\), which holds as soon as both sides have at least two participants and either side has three.

Cache-breaking injection is the primary engineering cost. The protocol itself is simple. The transports are well-understood. The real complexity lies in the cache economics: MCP server instructions are the only volatile section in a 15K+ token system prompt, and every token there costs 10x what it would cost in the cached region. This is a concrete, measurable trade-off.

First-class citizenship for external tools is non-negotiable. MCP tools go through the same permission system, the same hook pipeline (see Part III.4), and the same result formatting as built-in tools. If they did not, the system would need if (isMCPTool) branches throughout the codebase, and every new feature would need to be implemented twice. The Liskov Substitution Principle is not an academic nicety here; it is what makes MCP operationally viable.

Transport abstraction enables ecosystem growth. By decoupling the protocol from the transport, MCP can serve local processes (stdio), team servers (SSE), enterprise environments (HTTP + OAuth), and browser extensions (WebSocket) with the same message format. Each new transport expands the ecosystem’s reach without changing the protocol – the same insight that made TCP/IP successful.


Next in the series: Part III.4: Hooks & Lifecycle Events – over 25 lifecycle events for safety, auditing, and behavior modification without touching core code.