```mermaid
%%{init: {'theme': 'neutral', 'flowchart': {'useMaxWidth': false, 'htmlLabels': true, 'padding': 20, 'nodeSpacing': 30, 'rankSpacing': 40}, 'themeVariables': {'primaryColor': '#8B9DAF', 'primaryTextColor': '#ffffff', 'primaryBorderColor': '#6E7F91', 'secondaryColor': '#9CAF88', 'secondaryTextColor': '#ffffff', 'secondaryBorderColor': '#7A8D68', 'tertiaryColor': '#C2856E', 'tertiaryTextColor': '#ffffff', 'tertiaryBorderColor': '#A06A54', 'lineColor': '#B5A99A', 'textColor': '#4A4A4A', 'mainBkg': '#8B9DAF', 'nodeBorder': '#6E7F91', 'clusterBkg': 'rgba(139,157,175,0.12)', 'clusterBorder': '#B5A99A', 'edgeLabelBackground': 'transparent'}}}%%
flowchart LR
subgraph Clients
web["claude.ai<br>Web UI"]
vscode["VS Code<br>Extension"]
jetbrains["JetBrains<br>Plugin"]
teleport["Direct<br>Connect"]
end
subgraph Bridge["Bridge Layer"]
bridgemain["<b>bridgeMain</b>"]
transports["Transports<br><i>WebSocket / SSE / stdio</i>"]
messaging["<b>bridgeMessaging</b>"]
perms["Permission<br>Callbacks"]
config["bridgeConfig<br><i>OAuth / JWT</i>"]
bridgemain --- transports
transports --- messaging
messaging --- perms
perms --- config
end
subgraph Engine["Agent Engine"]
repl["REPL Loop"]
tools["Tool<br>Execution"]
session["Session<br>Storage"]
safety["Safety<br>Policy"]
end
web <--> bridgemain
vscode --> messaging
jetbrains --> perms
teleport --> config
bridgemain <--> repl
messaging --> tools
perms --> session
config --> safety
style web fill:#8B9DAF,color:#fff,stroke:#6E7F91
style vscode fill:#9CAF88,color:#fff,stroke:#7A8D68
style jetbrains fill:#C2856E,color:#fff,stroke:#A06A54
style teleport fill:#B39EB5,color:#fff,stroke:#8E7A93
style bridgemain fill:#C4A882,color:#fff,stroke:#A08562
style transports fill:#8E9B7A,color:#fff,stroke:#6E7B5A
style messaging fill:#8B9DAF,color:#fff,stroke:#6E7F91
style perms fill:#9CAF88,color:#fff,stroke:#7A8D68
style config fill:#C2856E,color:#fff,stroke:#A06A54
style repl fill:#B39EB5,color:#fff,stroke:#8E7A93
style tools fill:#C4A882,color:#fff,stroke:#A08562
style session fill:#8E9B7A,color:#fff,stroke:#6E7B5A
style safety fill:#8B9DAF,color:#fff,stroke:#6E7F91
```
Remote Runtime & Bridge Layer
How Claude Code escapes the local terminal – IDE bridges, remote sessions, and transport abstractions
Introduction: The Locality Assumption and Its Limits
Most developer tools assume they run on the same machine where the code lives. The compiler, the editor, the debugger – they all access files through the local filesystem and interact with the user through a local terminal or UI. This assumption simplifies everything: no network latency, no authentication boundaries, no partial failures. But it also creates a ceiling. The moment you want to run an AI agent on a remote container while approving its actions from a browser, or embed it inside an IDE extension while the engine runs as a separate process, the locality assumption breaks.
Claude Code’s architecture confronts this problem through three interlocking subsystems. The bridge layer (src/bridge/, 31 files, ~12,600 LOC) handles the connection between the local agent engine and remote orchestrators like the claude.ai web interface. The remote session layer (src/remote/, 4 files) manages the client-side view of a cloud-hosted session – subscribing to events via WebSocket and sending messages via HTTP. The transport abstraction (src/cli/transports/) ensures that regardless of whether the underlying wire protocol is WebSocket, SSE, or stdio, the rest of the system sees a uniform interface.
The deeper design challenge is not networking per se – it is preserving the permission model across process boundaries. When Claude Code runs locally, a tool permission request is a function call within a single process. When the agent runs on a remote container and the user sits in a browser, that same permission request must traverse a WebSocket, be rendered in a completely different UI framework, and have its response routed back to the exact pending tool invocation. The system must handle this transparently: neither the agent loop nor the tool implementation should know or care whether permissions are being resolved locally or remotely.
How to read this diagram. Start on the left with the four client types (claude.ai, VS Code, JetBrains, Direct Connect) that connect into the central Bridge Layer. Within the bridge, five submodules are chained: bridgeMain handles orchestration, Transports abstracts the wire protocol, bridgeMessaging routes messages and deduplicates echoes, Permission Callbacks handles cross-process permission flow, and bridgeConfig manages authentication. On the right, the Agent Engine contains the REPL loop, tool execution, session storage, and safety policy. The arrows show that every client request must traverse the bridge’s five stages before reaching the engine.
Source files covered in this post:
| File | Purpose | Size |
|---|---|---|
| `src/bridge/bridgeMain.ts` | Server-facing orchestration loop | ~700 LOC |
| `src/bridge/bridgeMessaging.ts` | Protocol core (ingress routing, echo dedup, control requests) | ~400 LOC |
| `src/bridge/bridgeConfig.ts` | Auth and URL resolution (dev overrides + production OAuth) | ~200 LOC |
| `src/bridge/bridgeEnabled.ts` | Bridge entitlement check (`isBridgeEnabled()`) | ~100 LOC |
| `src/bridge/remoteBridgeCore.ts` | Env-less bridge with three-step handshake | ~500 LOC |
| `src/bridge/bridgePermissionCallbacks.ts` | Cross-process permission protocol | ~200 LOC |
| `src/bridge/types.ts` | WorkSecret and bridge type definitions | ~150 LOC |
| `src/remote/RemoteSessionManager.ts` | Dual-channel session management | ~400 LOC |
| `src/remote/SessionsWebSocket.ts` | WebSocket connection lifecycle | ~300 LOC |
| `src/remote/sdkMessageAdapter.ts` | SDK message adapter for remote consumption | ~302 LOC |
| `src/upstreamproxy/` | Upstream HTTPS proxy for CCR containers | 2 files |
The Bridge Subsystem: 31 Files of IDE Integration
The bridge subsystem is the largest single integration layer in Claude Code. Its 31 files implement everything needed to connect a running agent instance to an external orchestrator – whether that is the claude.ai web application, a VS Code extension, or a JetBrains plugin.
Bridge Configuration and Authentication
The entry point for bridge authentication is bridgeConfig.ts, which consolidates all auth and URL resolution into a single module. The design follows a two-layer pattern: an override layer for development (environment variables like CLAUDE_BRIDGE_OAUTH_TOKEN) and a production layer that reads from the OAuth keychain:
```ts
// bridgeConfig.ts — two-layer auth resolution
export function getBridgeAccessToken(): string | undefined {
  return getBridgeTokenOverride() ?? getClaudeAIOAuthTokens()?.accessToken
}

export function getBridgeBaseUrl(): string {
  return getBridgeBaseUrlOverride() ?? getOauthConfig().BASE_API_URL
}
```

This pattern appears throughout the codebase – dev overrides take priority over production defaults, but only when an explicit `USER_TYPE=ant` flag is set. The guard prevents accidental use of override tokens in production.
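A minimal sketch of what that guard might look like (the function body is an assumption for illustration; only the environment variable names and the `USER_TYPE=ant` condition come from the text above):

```ts
// Hypothetical sketch of the dev-override guard: override tokens are honored
// only when USER_TYPE=ant is set, so production users can never pick them up.
export function getBridgeTokenOverride(): string | undefined {
  if (process.env.USER_TYPE !== 'ant') return undefined // guard against prod use
  return process.env.CLAUDE_BRIDGE_OAUTH_TOKEN || undefined
}
```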
The bridge entitlement check in bridgeEnabled.ts implements a multi-condition gate: the user must be a claude.ai subscriber (excluding Bedrock/Vertex/API-key users), must have a full-scope OAuth token (not a setup token), and must pass a GrowthBook feature gate (tengu_ccr_bridge). The blocking variant isBridgeEnabledBlocking() awaits the server-side feature flag if the disk cache misses, while the non-blocking isBridgeEnabled() uses the cached value – a subtle distinction that determines whether the user sees a false “not enabled” at cold start.
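The gate's three conditions can be sketched as a pure predicate (the types and parameter shapes here are illustrative; only the conditions and the `tengu_ccr_bridge` flag name come from the post):

```ts
// Illustrative predicate for the bridge entitlement gate described above.
type BridgeEntitlementInput = {
  isClaudeAiSubscriber: boolean // excludes Bedrock/Vertex/API-key users
  hasFullScopeOAuthToken: boolean // setup tokens don't qualify
}

export function checkBridgeEntitlement(
  input: BridgeEntitlementInput,
  featureGate: (flag: string) => boolean,
): boolean {
  return (
    input.isClaudeAiSubscriber &&
    input.hasFullScopeOAuthToken &&
    featureGate('tengu_ccr_bridge')
  )
}
```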
The Bridge Main Loop
The `bridgeMain.ts` file (~700 lines) implements the server-facing orchestration loop for multi-session bridge mode. When a user runs `claude remote-control`, this loop:
- Registers with the Environments API as a worker
- Polls for incoming work items (new sessions from claude.ai)
- Spawns child CLI processes for each session, passing a work secret containing the session ingress token, API base URL, and git source configuration
- Heartbeats active sessions and handles session lifecycle (completion, failure, timeout)
The work secret deserves attention. Each session dispatched by the server arrives with a base64url-encoded JSON payload containing credentials and configuration:
```ts
// types.ts — WorkSecret structure
export type WorkSecret = {
  version: number
  session_ingress_token: string
  api_base_url: string
  sources: Array<{
    type: string
    git_info?: { type: string; repo: string; ref?: string; token?: string }
  }>
  auth: Array<{ type: string; token: string }>
  claude_code_args?: Record<string, string> | null
  mcp_config?: unknown | null
  environment_variables?: Record<string, string> | null
  use_code_sessions?: boolean
}
```

The `sources` field enables git-context injection: when a session is created from a GitHub repository link in claude.ai, the bridge receives the repo URL and ref so the child process can clone and check out the correct code. The `use_code_sessions` flag selects between the v1 (Environments API) and v2 (Code Sessions API) transport protocols.
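Since the post describes the work secret as a base64url-encoded JSON payload, decoding it is essentially a one-liner. A sketch (error handling and schema validation omitted; the real code presumably validates the shape):

```ts
// Decode a base64url-encoded work secret into its JSON payload.
// Only the encoding and two field names are taken from the post.
export function decodeWorkSecret(secret: string): { version: number; api_base_url: string } {
  const json = Buffer.from(secret, 'base64url').toString('utf8')
  return JSON.parse(json)
}
```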
Bridge Messaging: The Protocol Core
bridgeMessaging.ts is the protocol heart of the bridge. It defines how messages flow between the agent and the remote client, handling three critical concerns: ingress routing, echo deduplication, and control request handling.
The ingress routing function handleIngressMessage() parses each WebSocket frame and dispatches it based on type. The type discrimination is precise – control responses, control requests, and SDK messages each take different code paths:
```ts
export function handleIngressMessage(
  data: string,
  recentPostedUUIDs: BoundedUUIDSet,
  recentInboundUUIDs: BoundedUUIDSet,
  onInboundMessage, onPermissionResponse, onControlRequest,
): void {
  const parsed = normalizeControlMessageKeys(jsonParse(data))
  if (isSDKControlResponse(parsed)) { onPermissionResponse?.(parsed); return }
  if (isSDKControlRequest(parsed)) { onControlRequest?.(parsed); return }
  if (!isSDKMessage(parsed)) return

  // UUID-based echo dedup
  const uuid = 'uuid' in parsed ? parsed.uuid : undefined
  if (uuid && recentPostedUUIDs.has(uuid)) return // our own echo
  if (uuid && recentInboundUUIDs.has(uuid)) return // re-delivery

  // ...forward to handler
}
```

The echo deduplication uses a `BoundedUUIDSet` – a FIFO ring buffer backed by both an array (for eviction ordering) and a Set (for O(1) lookup). When the transport sends a message, its UUID is recorded; when the same message comes back as a server echo, the UUID match suppresses it. This is necessary because the bridge often receives its own messages reflected back by the session ingress layer.
Control Request Handling
When the server sends a control request (e.g., initialize, set_model, interrupt, set_permission_mode), the bridge must respond promptly – the server kills the WebSocket connection after approximately 10-14 seconds of silence. The handleServerControlRequest() function dispatches on the request subtype:
```ts
switch (request.request.subtype) {
  case 'initialize':
    // Respond with minimal capabilities
    response = { type: 'control_response', response: {
      subtype: 'success', request_id: request.request_id,
      response: { commands: [], output_style: 'normal', models: [], account: {}, pid: process.pid }
    }}
    break
  case 'set_model': onSetModel?.(request.request.model); break
  case 'interrupt': onInterrupt?.(); break
  case 'set_permission_mode': /* policy verdict logic */ break
}
```

The outbound-only mode is particularly interesting. When a bridge session is configured as outbound-only (for mirror mode or the SDK `/bridge` subpath), all mutable control requests return an error response instead of a false success. The `initialize` request still succeeds – the server kills the connection otherwise – but `interrupt`, `set_model`, and `set_permission_mode` reply with an explicit error message:
“This session is outbound-only. Enable Remote Control locally to allow inbound control.”
Message Eligibility Filtering
Not all internal agent messages should cross the bridge boundary. The isEligibleBridgeMessage() function acts as a gateway filter: only user and assistant messages (plus slash-command system events) are forwarded. Virtual messages (inner REPL calls during tool execution) are suppressed – the bridge consumer sees only the high-level tool_use/tool_result summary, not the internal subqueries. This filtering is essential for keeping the remote UI coherent; without it, tool execution internals would leak into the conversation view.
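The filter can be approximated as a predicate over message shape (field names like `isVirtual` and the slash-command subtype string are assumptions for illustration; the eligibility rules themselves come from the paragraph above):

```ts
// Illustrative version of the eligibility gate: only user/assistant messages
// (plus slash-command system events) cross the bridge; virtual inner-REPL
// messages are suppressed.
type AgentMessage = { type: string; subtype?: string; isVirtual?: boolean }

export function isEligibleBridgeMessage(m: AgentMessage): boolean {
  if (m.isVirtual) return false // inner REPL calls never leave the process
  if (m.type === 'user' || m.type === 'assistant') return true
  return m.type === 'system' && m.subtype === 'slash_command'
}
```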
Remote Sessions: Dual-Channel Communication
The remote session layer (src/remote/) implements the client-side protocol for connecting to a cloud-hosted Claude Code session. This is the inverse of the bridge: while the bridge runs alongside the agent and pushes events outward, the remote session manager runs alongside the UI (terminal or web) and pulls events inward.
The Dual-Channel Architecture
Remote sessions use a split-channel design: a WebSocket for server-push events (assistant messages, permission requests, status updates) and HTTP POST for client-initiated actions (sending user messages). This split is not arbitrary – it reflects a fundamental asymmetry in the communication pattern.
```mermaid
%%{init: {'theme': 'neutral', 'flowchart': {'useMaxWidth': false, 'htmlLabels': true, 'padding': 20, 'nodeSpacing': 30, 'rankSpacing': 40}, 'themeVariables': {'primaryColor': '#8B9DAF', 'primaryTextColor': '#ffffff', 'primaryBorderColor': '#6E7F91', 'secondaryColor': '#9CAF88', 'secondaryTextColor': '#ffffff', 'secondaryBorderColor': '#7A8D68', 'tertiaryColor': '#C2856E', 'tertiaryTextColor': '#ffffff', 'tertiaryBorderColor': '#A06A54', 'lineColor': '#B5A99A', 'textColor': '#4A4A4A', 'mainBkg': '#8B9DAF', 'nodeBorder': '#6E7F91', 'clusterBkg': 'rgba(139,157,175,0.12)', 'clusterBorder': '#B5A99A', 'edgeLabelBackground': 'transparent'}}}%%
flowchart TD
CLIENT["<b>Local Client</b><br><i>Terminal / Web</i>"]
SERVER["<b>CCR Container</b><br><i>Agent Engine</i>"]
subgraph WS["WebSocket Push Channel (Server to Client)"]
direction LR
AM["assistant msgs"]
CR["control request"]
SE["stream event"]
end
subgraph HTTP["HTTP POST Channel (Client to Server)"]
direction LR
UM["user messages"]
AT["attachments"]
end
SERVER -- "push" --> WS -- "deliver" --> CLIENT
CLIENT -- "send" --> HTTP -- "deliver" --> SERVER
CLIENT <-. "control response<br><i>permission allow/deny within WS</i>" .-> SERVER
style CLIENT fill:#8B9DAF,color:#fff,stroke:#6E7F91
style SERVER fill:#9CAF88,color:#fff,stroke:#7A8D68
style AM fill:#C2856E,color:#fff,stroke:#A06A54
style CR fill:#B39EB5,color:#fff,stroke:#8E7A93
style SE fill:#C4A882,color:#fff,stroke:#A08562
style UM fill:#8E9B7A,color:#fff,stroke:#6E7B5A
style AT fill:#8B9DAF,color:#fff,stroke:#6E7F91
```
How to read this diagram. The Local Client (top) and CCR Container (bottom) communicate through two asymmetric channels. The solid downward arrows show the WebSocket push channel carrying assistant messages, control requests, and stream events from server to client. The solid upward arrows show the HTTP POST channel carrying user messages and attachments from client to server. The dashed bidirectional arrow represents the control subprotocol (permission allow/deny) that runs within the WebSocket. The key insight is that reads and writes use different transports optimized for their distinct reliability and latency requirements.
The push channel is a WebSocket subscription at /v1/sessions/ws/{sessionId}/subscribe. Authentication happens via HTTP headers on the upgrade request (not a post-connect auth message, unlike some older protocols). The client receives a stream of typed messages: SDKMessage for conversation content, control_request for permission prompts, and control_cancel_request for server-side cancellations of pending prompts.
The request channel uses sendEventToRemoteSession() – an HTTP POST to a teleport API endpoint. This asymmetry (WebSocket for push, HTTP for requests) exists because user messages are infrequent but must be reliably delivered, while server events are high-volume and benefit from the persistent connection’s lower latency.
The RemoteSessionManager
The RemoteSessionManager class coordinates the two channels and manages the permission lifecycle:
```ts
export class RemoteSessionManager {
  private websocket: SessionsWebSocket | null = null
  private pendingPermissionRequests: Map<string, SDKControlPermissionRequest>

  connect(): void {
    this.websocket = new SessionsWebSocket(
      this.config.sessionId, this.config.orgUuid,
      this.config.getAccessToken, wsCallbacks,
    )
    void this.websocket.connect()
  }

  async sendMessage(content, opts?): Promise<boolean> {
    return await sendEventToRemoteSession(this.config.sessionId, content, opts)
  }

  respondToPermissionRequest(requestId, result): void {
    this.pendingPermissionRequests.delete(requestId)
    this.websocket?.sendControlResponse(/* ... */)
  }
}
```

The permission flow is the most architecturally significant aspect. When the remote agent needs tool permission, the server sends a `control_request` with subtype `can_use_tool`. The manager stores it in `pendingPermissionRequests`, invokes the `onPermissionRequest` callback (which renders the prompt in the local UI), and waits. When the user responds, `respondToPermissionRequest()` serializes the verdict as a `control_response` and sends it back through the WebSocket. The entire flow is asynchronous and cancellable – the server can send a `control_cancel_request` at any time to dismiss a stale prompt.
WebSocket Resilience
The SessionsWebSocket class implements a connection lifecycle with three resilience mechanisms:
- **Reconnection with backoff:** on a transient close, the client schedules a reconnect after 2 seconds, up to 5 attempts. A separate counter handles the `4001` (session not found) code, which can be transient during server-side compaction – up to 3 retries with linearly increasing delay.
- **Permanent close detection:** close code `4003` (unauthorized) halts reconnection immediately. The client distinguishes between "server rejected permanently" and "connection dropped transiently."
- **Keep-alive pinging:** a 30-second ping interval maintains the connection through proxies and load balancers. Without it, intermediate infrastructure (Cloudflare, GKE ingress) would close idle connections.
```ts
const PERMANENT_CLOSE_CODES = new Set([4003]) // unauthorized
const MAX_SESSION_NOT_FOUND_RETRIES = 3 // 4001 during compaction

private handleClose(closeCode: number): void {
  if (PERMANENT_CLOSE_CODES.has(closeCode)) {
    this.callbacks.onClose?.() // permanent — stop
    return
  }
  if (closeCode === 4001) {
    this.sessionNotFoundRetries++
    if (this.sessionNotFoundRetries > MAX_SESSION_NOT_FOUND_RETRIES) {
      this.callbacks.onClose?.()
      return
    }
    this.scheduleReconnect(RECONNECT_DELAY_MS * this.sessionNotFoundRetries, ...)
    return
  }
  // Transient — attempt reconnect
  if (this.reconnectAttempts < MAX_RECONNECT_ATTEMPTS) {
    this.reconnectAttempts++
    this.scheduleReconnect(RECONNECT_DELAY_MS, ...)
  }
}
```

The `viewerOnly` configuration mode suppresses interrupt signals and disables the 60-second reconnect timeout. It is used by `claude assistant`, which observes a session without controlling it – a read-only mirror that should never accidentally kill a running agent.
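The two retry counters collapse into a small pure function. A sketch built from the constants in the excerpt above (`null` here means "stop reconnecting"; the real class tracks state rather than passing attempt counts around):

```ts
// Reconnect policy sketch: 4003 is permanent, 4001 backs off linearly up to
// 3 retries, and any other close retries at a fixed 2 s delay up to 5 attempts.
const RECONNECT_DELAY_MS = 2000

export function reconnectDelay(closeCode: number, attempt: number): number | null {
  if (closeCode === 4003) return null // unauthorized: permanent, stop
  if (closeCode === 4001) return attempt > 3 ? null : RECONNECT_DELAY_MS * attempt
  return attempt > 5 ? null : RECONNECT_DELAY_MS // generic transient close
}
```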
Transport Abstraction: One Interface, Three Wire Protocols
The src/cli/transports/ directory implements a transport abstraction that makes the rest of the system wire-protocol-agnostic. Whether bytes flow over WebSocket, Server-Sent Events, or standard I/O, the consumer sees the same ReplBridgeTransport interface.
The Transport Interface
The ReplBridgeTransport type captures the full surface area that the bridge needs from any transport:
```ts
export type ReplBridgeTransport = {
  write(message: StdoutMessage): Promise<void>
  writeBatch(messages: StdoutMessage[]): Promise<void>
  close(): void
  isConnectedStatus(): boolean
  getLastSequenceNum(): number
  reportState(state: SessionState): void
  reportDelivery(eventId: string, status: 'processing' | 'processed'): void
  flush(): Promise<void>
  // ...callbacks for onData, onClose, onConnect
}
```

Two implementations exist, corresponding to the v1 and v2 transport protocols:

- **v1** (`createV1ReplTransport`): wraps `HybridTransport`, which uses a WebSocket for reads and HTTP POST for writes. The POST path goes through `SerialBatchEventUploader` with exponential backoff, jitter, and a configurable failure cap.
- **v2** (`createV2ReplTransport`): wraps `SSETransport` for reads and `CCRClient` for writes. This is the newer Code Sessions API path, where each session has a dedicated worker endpoint. The v2 path additionally supports epoch-based worker registration, delivery acknowledgments (received/processed), and state reporting (`requires_action` for permission prompts).
The HybridTransport: WebSocket Reads, HTTP Writes
The HybridTransport extends WebSocketTransport and overrides the write path. Instead of writing to the WebSocket (which would require the server to accept bidirectional traffic), it accumulates events and POSTs them to a converted HTTP endpoint:
```
WS URL:   wss://api.example.com/v2/session_ingress/ws/<session_id>
POST URL: https://api.example.com/v2/session_ingress/session/<session_id>/events
```
Stream events (high-volume content deltas) are buffered for 100ms before enqueueing, reducing the POST count during rapid streaming. Non-stream writes flush any buffered stream events first to preserve ordering. This micro-batching is a classic throughput-vs-latency tradeoff: 100ms of added latency on stream events buys significantly fewer HTTP requests.
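The buffering rule ("hold stream events up to 100 ms, but flush them before any non-stream write") can be sketched as follows. The class name and shape are illustrative; only the 100 ms window and the ordering rule come from the text:

```ts
// Micro-batching sketch: stream events accumulate briefly; a non-stream write
// flushes the buffer first so event ordering is preserved.
export class StreamEventBatcher {
  private buffer: unknown[] = []
  private timer: ReturnType<typeof setTimeout> | null = null

  constructor(
    private post: (events: unknown[]) => void,
    private windowMs = 100,
  ) {}

  writeStream(event: unknown): void {
    this.buffer.push(event)
    if (!this.timer) this.timer = setTimeout(() => this.flush(), this.windowMs)
  }

  writeNonStream(event: unknown): void {
    this.flush() // drain buffered stream events before this one
    this.post([event])
  }

  flush(): void {
    if (this.timer) { clearTimeout(this.timer); this.timer = null }
    if (this.buffer.length) { this.post(this.buffer); this.buffer = [] }
  }
}
```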
The write pipeline delegates to SerialBatchEventUploader, which ensures at most one POST is in-flight at any time. This serialization is critical: bridge mode fires writes via void transport.write() (fire-and-forget), and without serialization, concurrent POSTs to the same Firestore document would cause write collisions, retry storms, and pager alerts.
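The at-most-one-in-flight guarantee is essentially a promise chain. A simplified sketch of what a serial uploader has to do (the real `SerialBatchEventUploader` additionally handles backoff, jitter, and the failure cap):

```ts
// Serialize async uploads: each upload starts only after the previous one
// settles, so at most one POST is ever in flight.
export class SerialUploader {
  private tail: Promise<void> = Promise.resolve()

  enqueue(upload: () => Promise<void>): Promise<void> {
    // Chain on both fulfillment and rejection so one failure can't stall the queue.
    this.tail = this.tail.then(upload, upload)
    return this.tail
  }
}
```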
SSE Transport and Sequence Numbers
The SSETransport implements the read side for v2 sessions. The key innovation is sequence number tracking: each SSE frame carries an id field (the event sequence number), and when the transport reconnects, it sends Last-Event-ID / from_sequence_num so the server resumes from where the previous stream left off instead of replaying the entire session history.
This is critical for transport swaps – when a JWT refresh forces a new transport, the bridge reads getLastSequenceNum() from the old transport, passes it to the new one’s constructor, and the server skips already-delivered events. Without this, every token refresh would replay the full session, generating phantom “user sent a new message” system reminders.
Direct Connect: Portable Sessions
The direct-connect subsystem (src/server/) enables a different deployment model: instead of connecting through the cloud infrastructure, a client connects directly to a Claude Code instance running as a local server. This is the teleport mechanism – a session that can be attached from different clients without cloud intermediation.
The createDirectConnectSession() function POSTs to a local server’s /sessions endpoint, passing the working directory and permission configuration. The server responds with a session ID and a WebSocket URL:
```ts
export async function createDirectConnectSession({
  serverUrl, authToken, cwd, dangerouslySkipPermissions,
}): Promise<{ config: DirectConnectConfig; workDir?: string }> {
  const resp = await fetch(`${serverUrl}/sessions`, {
    method: 'POST',
    headers, body: jsonStringify({ cwd, ...permissionOpts }),
  })
  const data = connectResponseSchema().safeParse(await resp.json())
  return {
    config: { serverUrl, sessionId: data.session_id, wsUrl: data.ws_url, authToken },
    workDir: data.work_dir,
  }
}
```

The `DirectConnectSessionManager` mirrors the `RemoteSessionManager` API but communicates entirely over a single WebSocket. Messages arrive as newline-delimited JSON (the `--input-format stream-json` protocol), and the manager filters out internal types (`keep_alive`, `streamlined_text`, `post_turn_summary`) that are irrelevant to the external consumer.
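That newline-delimited filtering step can be sketched as a small generator (type names are illustrative; the list of filtered types comes from the text above):

```ts
// Parse NDJSON frames and drop internal message types that the external
// consumer should never see.
const INTERNAL_TYPES = new Set(['keep_alive', 'streamlined_text', 'post_turn_summary'])

export function* parseSessionFrames(chunk: string): Generator<{ type: string }> {
  for (const line of chunk.split('\n')) {
    if (!line.trim()) continue // skip blank lines between frames
    const msg = JSON.parse(line) as { type: string }
    if (!INTERNAL_TYPES.has(msg.type)) yield msg
  }
}
```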
The permission flow is identical in structure to the remote case – control_request with can_use_tool triggers the callback, and the response is serialized as control_response – but the latency profile is fundamentally different. Direct-connect permission requests travel over localhost (sub-millisecond) rather than through cloud infrastructure (50-200ms), making the interactive experience noticeably snappier.
Upstream Proxy: Session-Token Bootstrap for Remote Containers
When Claude Code runs inside a CCR (Cloud Code Runtime) container, it faces a networking challenge: the container needs to make authenticated HTTPS requests to external services (Datadog, GitHub API) but sits behind a controlled egress gateway. The upstream proxy subsystem (src/upstreamproxy/, 2 files) solves this through a four-stage bootstrap:
1. **Token acquisition:** read the one-time session token from `/run/ccr/session_token`. This file is injected by the container orchestrator and contains a scoped credential.
2. **Process hardening:** call `prctl(PR_SET_DUMPABLE, 0)` via Bun FFI to block same-UID `ptrace` of the process heap. This prevents a prompt-injected `gdb -p $PPID` from scraping the token from memory – a defense-in-depth measure against agent-mediated attacks.
3. **CA bundle construction:** download the proxy's CA certificate from the CCR API and concatenate it with the system CA bundle. This allows `curl`, `gh`, Python `httpx`, and other tools to trust the MITM proxy's TLS interception.
4. **Relay startup:** launch a local TCP server that accepts HTTP CONNECT requests and tunnels bytes over WebSocket to the CCR upstream proxy endpoint. The relay uses protobuf framing (`UpstreamProxyChunk`) for compatibility with the gateway's WebSocket stream adapter.
After the relay is running, the token file is unlinked – the token remains only in heap memory, and the file is gone before the agent loop can access it. If any step fails, the proxy is disabled and the session proceeds without it. This fail-open design ensures a broken proxy setup never breaks an otherwise-working session.
The relay exports environment variables that all child processes inherit:
```ts
export function getUpstreamProxyEnv(): Record<string, string> {
  if (!state.enabled) return {}
  const proxyUrl = `http://127.0.0.1:${state.port}`
  return {
    HTTPS_PROXY: proxyUrl, https_proxy: proxyUrl,
    NO_PROXY: NO_PROXY_LIST, no_proxy: NO_PROXY_LIST,
    SSL_CERT_FILE: state.caBundlePath,
    NODE_EXTRA_CA_CERTS: state.caBundlePath,
    REQUESTS_CA_BUNDLE: state.caBundlePath,
    CURL_CA_BUNDLE: state.caBundlePath,
  }
}
```

The `NO_PROXY` list is carefully curated: localhost, RFC 1918 ranges, the IMDS range, Anthropic's own API (which must not be intercepted), and major package registries. Three forms of anthropic.com (`*.anthropic.com`, `.anthropic.com`, `anthropic.com`) are listed because different HTTP clients parse `NO_PROXY` differently – Bun uses glob matching, Python's urllib uses suffix matching, and some clients match only the apex domain.
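As a concrete illustration of why multiple spellings matter, here is a suffix-style matcher in the spirit of Python's urllib behavior (a simplification written for this post, not code from the actual proxy – real clients differ in exactly the ways described above):

```ts
// Suffix-style NO_PROXY matching: an entry matches the host exactly or as a
// dot-separated suffix. Glob- or apex-only clients would behave differently.
export function noProxyMatches(host: string, entries: string[]): boolean {
  return entries.some(entry => {
    const suffix = entry.startsWith('.') ? entry : '.' + entry
    return host === entry || host.endsWith(suffix)
  })
}
```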
Permission Flow Across Process Boundaries
The most architecturally demanding aspect of the remote runtime is preserving the permission model when the agent and the user interface live in different processes – potentially on different machines.
The Permission Callback Interface
bridgePermissionCallbacks.ts defines the protocol for permission requests that cross process boundaries:
```ts
type BridgePermissionCallbacks = {
  sendRequest(requestId, toolName, input, toolUseId,
    description, permissionSuggestions?, blockedPath?): void
  sendResponse(requestId, response: BridgePermissionResponse): void
  cancelRequest(requestId): void
  onResponse(requestId, handler): () => void // returns unsubscribe
}
```

The flow operates as follows:
1. The agent’s tool execution hits a permission check
2. The local permission system sees that bridge mode is active and invokes `sendRequest()` instead of rendering a local prompt
3. The bridge serializes the request as a `control_request` with subtype `can_use_tool` and writes it to the transport
4. The remote client (claude.ai, VS Code, etc.) receives the request and renders a permission prompt in its own UI
5. The user approves or denies
6. The response travels back as a `control_response`
7. `onResponse()` resolves the pending promise, and tool execution continues
The cancelRequest() method handles the case where the agent moves on before the user responds – for example, if the user interrupts the session while a permission prompt is pending. Without cancellation, the remote UI would show a stale prompt for a tool that will never execute.
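The steps above, plus the cancellation path, reduce to pending-promise bookkeeping. A minimal sketch (the class and result shape are illustrative; only the request/response/cancel structure comes from the post):

```ts
// Pending-promise bookkeeping for cross-process permission requests: each
// request id maps to a resolver; a response or a cancellation settles it.
type PermissionResult = { behavior: 'allow' | 'deny' | 'cancelled' }

export class PermissionBroker {
  private pending = new Map<string, (r: PermissionResult) => void>()

  request(requestId: string): Promise<PermissionResult> {
    return new Promise(resolve => this.pending.set(requestId, resolve))
  }

  respond(requestId: string, result: PermissionResult): void {
    this.pending.get(requestId)?.(result)
    this.pending.delete(requestId)
  }

  cancel(requestId: string): void {
    // Settle the promise so the waiter doesn't hang on a stale prompt.
    this.respond(requestId, { behavior: 'cancelled' })
  }
}
```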
Env-Less Bridge: The v2 Protocol Path
The remoteBridgeCore.ts file implements the “env-less” bridge – a newer protocol path that eliminates the Environments API layer entirely. Instead of the multi-step register/poll/dispatch lifecycle, it uses a direct three-step handshake:
1. `POST /v1/code/sessions` (OAuth) – creates the session
2. `POST /v1/code/sessions/{id}/bridge` (OAuth) – returns a worker JWT and an epoch number. Each `/bridge` call bumps the epoch – it *is* the worker registration
3. `createV2ReplTransport(worker_jwt, epoch)` – opens SSE + CCRClient

This eliminates heartbeating, deregistration, and the environment lifecycle entirely. Token refresh is proactive: `createTokenRefreshScheduler` calls `/bridge` again before the JWT expires, receiving a fresh token and a new epoch. If the SSE stream gets a 401, the transport is rebuilt with fresh credentials while preserving the sequence-number cursor – no event replay, no session interruption.
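The proactive-refresh timing boils down to "fire a safety margin before expiry". A sketch (the margin value and function shape are assumptions, not the real scheduler):

```ts
// Compute how long to wait before calling /bridge again: some margin before
// the JWT expires, clamped at zero if the token is already near expiry.
export function refreshDelayMs(
  expiresAtMs: number,
  nowMs: number,
  marginMs = 60_000, // assumed safety margin
): number {
  return Math.max(0, expiresAtMs - nowMs - marginMs)
}
```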
Spawn Modes: Session Isolation Strategies
The bridge supports three spawn modes that determine how sessions are isolated on the host machine:
| Mode | Working Directory | Lifecycle | Use Case |
|---|---|---|---|
| `single-session` | Current directory | Bridge tears down when session ends | Default for `claude remote-control` |
| `worktree` | Isolated git worktree per session | Persistent server, worktrees cleaned up | Multi-session on a git repo |
| `same-dir` | Shared current directory | Persistent server | Simple multi-session (can stomp) |
The worktree mode is the most sophisticated: for each incoming session, the bridge calls createAgentWorktree() to create a new git worktree, spawns the child CLI process in that worktree, and calls removeAgentWorktree() on session completion. This gives each concurrent session its own branch and working tree, preventing file-level conflicts between parallel agents working on the same repository.
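A shell-out sketch of that lifecycle (the `.worktrees` path and branch-naming scheme are invented for illustration; the post only names the `createAgentWorktree()` / `removeAgentWorktree()` pair):

```ts
import { execFileSync } from 'node:child_process'
import { join } from 'node:path'

// Per-session git worktree lifecycle sketch: each session gets its own branch
// and working tree, removed again when the session completes.
export function createAgentWorktree(repoDir: string, sessionId: string): string {
  const worktreePath = join(repoDir, '.worktrees', sessionId)
  execFileSync('git', ['-C', repoDir, 'worktree', 'add', '-b', `session-${sessionId}`, worktreePath])
  return worktreePath
}

export function removeAgentWorktree(repoDir: string, worktreePath: string): void {
  execFileSync('git', ['-C', repoDir, 'worktree', 'remove', '--force', worktreePath])
}
```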
Summary
The remote runtime layer reveals a recurring pattern in systems design: the cost of distribution is not networking – it is preserving invariants across process boundaries. The bridge, remote session, and transport layers together are ~14,800 lines of code. Of those, perhaps 2,000 handle actual network I/O. The rest handle authentication, echo deduplication, sequence number tracking, permission routing, session lifecycle, reconnection, and failure recovery.
The split between push (WebSocket/SSE) and pull (HTTP POST) channels is a design decision that recurs in many distributed systems. Slack’s real-time messaging API, Firebase’s Firestore listeners, and GraphQL subscriptions all separate the subscription channel from the mutation channel. The reason is always the same: reads and writes have different reliability requirements, different scaling characteristics, and different failure modes. A dropped subscription can be recovered by replaying from a cursor. A dropped write must be retried or the user must be notified. Mixing them on the same channel conflates these distinct concerns.
The upstream proxy’s defense-in-depth approach – prctl hardening, immediate token file deletion, fail-open on any error – reflects a mature security posture. The system assumes that the agent is a threat vector (it executes arbitrary code suggested by an LLM) and designs accordingly. The token never exists in a file and in memory simultaneously once bootstrap completes. The process cannot be ptraced even by same-user processes. And if any of this fails, the session continues without the proxy rather than crashing – because a session without external API access is still useful, while a crashed session is not.
Appendix A: Wire Protocol Message Types
Every message that crosses the bridge boundary is either a StdoutMessage (server → client) or a StdinMessage (client → server). These are the two union types defined in controlSchemas.ts that constitute the complete wire protocol. For the detailed schema of each individual SDK message type and control request subtype, see Post 15 Appendices.
StdoutMessage (Server → Client) — 8 variants
| Variant | Description | Volume |
|---|---|---|
| SDKMessage | Full conversation messages (user, assistant, system, result, tool_progress, etc.) | High — bulk of traffic |
| SDKStreamlinedTextMessage | Optimized text-only format (brief mode) | Medium |
| SDKStreamlinedToolUseSummaryMessage | Optimized tool summary format (brief mode) | Medium |
| SDKPostTurnSummaryMessage | Summary emitted after each assistant turn | Low — once per turn |
| SDKControlResponse | Response to a client-initiated control request | Low |
| SDKControlRequest | Server-initiated commands (initialize, can_use_tool, interrupt, etc.) | Low — permission prompts, lifecycle |
| SDKControlCancelRequest | Cancels a pending control request (e.g., stale permission prompt) | Rare |
| SDKKeepAliveMessage | Connection liveness signal | Periodic — every 30s |
StdinMessage (Client → Server) — 5 variants
| Variant | Description | Volume |
|---|---|---|
| SDKUserMessage | User-typed messages and attachments | Low — user-paced |
| SDKControlRequest | Client-initiated commands (interrupt, set_model, set_permission_mode) | Low |
| SDKControlResponse | Response to a server-initiated request (permission allow/deny) | Low — one per prompt |
| SDKKeepAliveMessage | Connection liveness signal | Periodic |
| SDKUpdateEnvironmentVariablesMessage | Runtime environment variable injection | Rare |
Implementation: src/entrypoints/sdk/controlSchemas.ts (lines 642–663)
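The two unions lend themselves to discriminated-union modeling. The real definitions in controlSchemas.ts are Zod schemas with many more fields; the abbreviated sketch below invents the `type` tag values and fields purely to illustrate the shape and how a handler narrows on the discriminant:

```typescript
// Abbreviated, illustrative sketch of the two wire unions. Tag values
// and fields are assumptions, not the actual schema.
type StdoutMsg =
  | { type: "message"; payload: unknown }        // SDKMessage
  | { type: "streamlined_text"; text: string }   // SDKStreamlinedTextMessage
  | { type: "control_request"; id: string }      // server-initiated command
  | { type: "control_response"; id: string }     // reply to a client request
  | { type: "keep_alive" };                      // liveness signal

type StdinMsg =
  | { type: "user"; text: string }               // SDKUserMessage
  | { type: "control_request"; id: string }      // client-initiated command
  | { type: "control_response"; id: string }     // permission allow/deny
  | { type: "keep_alive" };

// Narrowing on the discriminant gives type-safe dispatch on either side.
function isControlTraffic(m: StdoutMsg | StdinMsg): boolean {
  return m.type === "control_request" || m.type === "control_response";
}
```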
Appendix B: HTTP API Endpoints
The bridge communicates with Anthropic’s backend through REST endpoints in addition to the WebSocket/SSE channels. Endpoints authenticate with OAuth bearer tokens, except the v2 worker endpoints noted below, which use the worker JWT issued at registration.
Bridge Environment Management
| Method | Endpoint | Purpose | Implementation |
|---|---|---|---|
| POST | /v1/environments/bridge | Register bridge as a worker (returns environment_id) | src/bridge/bridgeApi.ts |
| DELETE | /v1/environments/bridge/{environmentId} | Deregister bridge (graceful shutdown) | src/bridge/bridgeApi.ts |
| POST | /v1/environments/{environmentId}/bridge/reconnect | Reconnect session after bridge failure | src/bridge/bridgeApi.ts |
Work Dispatch (v1 protocol)
| Method | Endpoint | Purpose | Implementation |
|---|---|---|---|
| GET | /v1/environments/{environmentId}/work/poll | Long-poll for new work items (sessions) | src/bridge/bridgeApi.ts |
| POST | /v1/environments/{environmentId}/work/{workId}/ack | Acknowledge work receipt | src/bridge/bridgeApi.ts |
| POST | /v1/environments/{environmentId}/work/{workId}/stop | Signal work completion or failure | src/bridge/bridgeApi.ts |
| POST | /v1/environments/{environmentId}/work/{workId}/heartbeat | Extend work lease (prevents timeout) | src/bridge/bridgeApi.ts |
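The four endpoints above imply a worker loop: poll until a work item arrives, acknowledge it, heartbeat while the session runs, then signal stop. A sketch, with an injected HTTP helper standing in for the real client so it runs offline (the control flow, not the function names, is what the endpoints dictate):

```typescript
// Sketch of the v1 work-dispatch loop. `api` is an injected HTTP
// helper; response shapes are illustrative.
type Api = (method: string, path: string) => Promise<any>;

async function workLoop(
  api: Api,
  envId: string,
  runWork: (work: { id: string }) => Promise<void>,
): Promise<void> {
  // Long-poll: blocks server-side until a session is dispatched or times out.
  const work = await api("GET", `/v1/environments/${envId}/work/poll`);
  if (!work) return; // poll timed out with no work; caller loops again

  await api("POST", `/v1/environments/${envId}/work/${work.id}/ack`);

  // Heartbeat extends the work lease so the backend doesn't reassign it.
  const beat = setInterval(
    () => api("POST", `/v1/environments/${envId}/work/${work.id}/heartbeat`),
    30_000,
  );
  try {
    await runWork(work);
  } finally {
    clearInterval(beat);
    // Stop is sent on success *and* failure, releasing the lease either way.
    await api("POST", `/v1/environments/${envId}/work/${work.id}/stop`);
  }
}
```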
Session Management
| Method | Endpoint | Purpose | Implementation |
|---|---|---|---|
| POST | /v1/sessions/{sessionId}/events | Send events to session (permission responses) | src/bridge/bridgeApi.ts |
| POST | /v1/sessions/{sessionId}/archive | Archive completed session (409 if already archived) | src/bridge/bridgeApi.ts |
v2 Code Sessions API
| Method | Endpoint | Purpose | Implementation |
|---|---|---|---|
| POST | /v1/code/sessions | Create a new code session (OAuth) | src/bridge/remoteBridgeCore.ts |
| POST | /v1/code/sessions/{id}/bridge | Register as worker; returns worker JWT + epoch | src/bridge/remoteBridgeCore.ts |
| GET | /v1/code/sessions/{id}/events | SSE event stream (worker JWT, from_sequence_num) | src/cli/transports/SSETransport.ts |
| POST | /v1/code/sessions/{id}/events | Post events to session (worker JWT) | src/cli/transports/CCRClient.ts |
Direct Connect (local server)
| Method | Endpoint | Purpose | Implementation |
|---|---|---|---|
| POST | /sessions | Create session on local server | src/remote/directConnect.ts |
| WS | /sessions/ws/{sessionId} | WebSocket connection for direct-connect | src/remote/directConnect.ts |
Appendix C: WebSocket Close Codes
The SessionsWebSocket handles close codes with differentiated retry behavior.
| Close Code | Meaning | Retry Behavior | Implementation |
|---|---|---|---|
| 4003 | Unauthorized (permanent rejection) | No retry — connection terminated immediately | src/remote/SessionsWebSocket.ts |
| 4001 | Session not found | Up to 3 retries with linear backoff (transient during compaction) | src/remote/SessionsWebSocket.ts |
| Other codes | Transient failure | Up to 5 retries with 2s delay | src/remote/SessionsWebSocket.ts |
| Normal close | Clean shutdown | No retry | src/remote/SessionsWebSocket.ts |
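The table reduces to a small policy function. The codes and retry limits come directly from the table; the backoff arithmetic for the linear case is illustrative, since the table specifies the strategy but not the exact delays:

```typescript
// Close-code retry policy, per the table above. Linear-backoff delay
// values are an assumption.
type RetryPolicy = {
  retry: boolean;
  maxAttempts: number;
  delayMs: (attempt: number) => number;
};

function policyFor(code: number, wasClean: boolean): RetryPolicy {
  if (wasClean) {
    return { retry: false, maxAttempts: 0, delayMs: () => 0 }; // clean shutdown
  }
  if (code === 4003) {
    return { retry: false, maxAttempts: 0, delayMs: () => 0 }; // unauthorized: permanent
  }
  if (code === 4001) {
    // Session not found is often transient during compaction: linear backoff.
    return { retry: true, maxAttempts: 3, delayMs: (n) => 1000 * (n + 1) };
  }
  // Any other code: treat as a transient failure with a fixed 2s delay.
  return { retry: true, maxAttempts: 5, delayMs: () => 2000 };
}
```

Distinguishing "permanent" from "transient" at the close-code level is what lets the client fail fast on auth errors while riding out compaction windows.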