
@modelcontextprotocol/sdk has cross-client data leak via shared server/transport instance reuse

High severity • GitHub Reviewed • Published Feb 4, 2026 in modelcontextprotocol/typescript-sdk • Updated Feb 9, 2026

Package

@modelcontextprotocol/sdk (npm)

Affected versions

>= 1.10.0, <= 1.25.3

Patched versions

1.26.0

Description

Summary

Cross-client data leak via two distinct issues: (1) reusing a single StreamableHTTPServerTransport across multiple client requests, and (2) reusing a single McpServer/Server instance across multiple transports. Both are most common in stateless deployments.

Impact

This advisory covers two related but distinct vulnerabilities. A deployment may be affected by one or both.

Issue 1: Transport re-use

What happens: When a single StreamableHTTPServerTransport instance handles multiple client requests, JSON-RPC message ID collisions cause responses to be routed to the wrong client's HTTP connection. The transport maintains an internal requestId → stream mapping, and since MCP client SDKs generate message IDs using an incrementing counter starting at 0, two clients produce identical IDs. The second client's request overwrites the first client's mapping entry, routing the response to the wrong HTTP stream.

What is affected: All request types — tools/call, resources/read, prompts/get, etc. No server-initiated features are required to trigger this.

Conditions:

  • A single StreamableHTTPServerTransport instance is reused across multiple client requests (most common in stateless mode without sessionIdGenerator)
  • Two or more clients send requests concurrently
  • Clients generate overlapping JSON-RPC message IDs (the SDK's default client uses an incrementing counter starting at 0)
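
To make the vulnerable shape concrete, here is a minimal sketch of the Issue 1 pattern. It mirrors the workaround examples below; the Express-style app is an assumption, not taken from the advisory.

// VULNERABLE (Issue 1): one stateless transport instance serves every request.
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StreamableHTTPServerTransport } from '@modelcontextprotocol/sdk/server/streamableHttp.js';

const server = new McpServer({ name: 'my-server', version: '1.0.0' });
const transport = new StreamableHTTPServerTransport({ sessionIdGenerator: undefined });
await server.connect(transport);

app.post('/mcp', async (req, res) => {
  // Client A and client B can both send JSON-RPC id 0; the second request
  // overwrites the first entry in the transport's internal
  // requestId -> stream map, so A's response can be written to B's stream.
  await transport.handleRequest(req, res);
});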

Issue 2: Server/Protocol re-use

What happens: When a single McpServer (or Server) instance is connected via connect() to multiple transports (one per client), the Protocol's internal this._transport reference is silently overwritten. Whether the final response to a request is routed correctly depends on timing, because the Protocol captures the transport reference when it begins handling the request (see below). Any server-to-client messages sent during request handling go through the shared this._transport reference, which may point to a different client's transport.

What is affected: This depends on what features your server uses:

  • Final responses (the return value from a tool/resource/prompt handler): Affected in most cases. The Protocol captures this._transport at request-handling time, not the transport that delivered the request. This means:
    • If a request is already in-flight when a second connect() occurs (i.e., the request arrived before the transport was overwritten), the captured reference is correct and the response routes properly.
    • If a request arrives on the old transport after a second connect() has overwritten this._transport, the captured reference points to the new transport, and the response is mis-routed. The requesting client will time out.
  • Progress notifications sent during tool execution via sendNotification: Affected. These are dispatched through this._transport. When the transport has been overwritten and message IDs collide on the new transport, notifications are routed to the wrong client's HTTP stream (see the sketch after this list).
  • Sampling (createMessage) and elicitation requests sent during tool execution via sendRequest: Affected. Same mechanism — the request is sent to the wrong client.
  • Spontaneous server-initiated notifications (outside any request handler): Affected. These are sent to whichever client's transport was most recently connected.
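
For concreteness, here is a sketch of a tool handler that exercises the affected in-request paths. The tool name, zod schema, and progress logic are illustrative assumptions; server stands for a shared McpServer instance, and extra.sendNotification is the API named above.

import { z } from 'zod';

// Registered on a shared McpServer, this handler's progress notifications
// travel through this._transport, which may no longer be the transport
// that delivered the tools/call request.
server.tool('long-job', { steps: z.number() }, async ({ steps }, extra) => {
  const progressToken = extra._meta?.progressToken;
  for (let i = 0; i < steps; i++) {
    if (progressToken !== undefined) {
      await extra.sendNotification({
        method: 'notifications/progress',
        params: { progressToken, progress: i, total: steps }
      });
    }
    // ... one unit of work ...
  }
  return { content: [{ type: 'text', text: 'done' }] };
});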

Conditions:

  • A single McpServer/Server instance is connected via connect() to multiple transports across requests or sessions
  • Two or more clients connect concurrently
  • For in-request notifications/requests: a message ID collision on the other transport is required for a silent data leak (the SDK's default client uses an incrementing counter starting at 0); without a collision, the transport throws an error rather than misrouting.
  • For spontaneous notifications: no collision is needed; messages are always sent to the last-connected client's transport
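
The vulnerable Issue 2 shape, as a sketch (same assumed imports and app as the Issue 1 sketch above; only the sharing pattern differs):

// VULNERABLE (Issue 2): a fresh transport per request, but one shared
// McpServer. Before v1.26.0 each connect() silently overwrote
// this._transport, so in-flight handlers could write to the wrong client.
const server = new McpServer({ name: 'my-server', version: '1.0.0' });

app.post('/mcp', async (req, res) => {
  const transport = new StreamableHTTPServerTransport({ sessionIdGenerator: undefined });
  await server.connect(transport); // overwrites any previously connected transport
  await transport.handleRequest(req, res);
});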

How to tell if you're affected

  • You use sessionIdGenerator (stateful mode) with a new McpServer per session → not affected by either issue. Each session has its own transport and server instance.
  • You use sessionIdGenerator but share a single McpServer across sessions → not affected by Issue 1 (transport re-use), but affected by Issue 2 (server re-use) if your tools send progress notifications or issue sampling/elicitation requests during execution.
  • You are in stateless mode and reuse both a transport and a server across requests → affected by both issues; all request types can leak.
  • You are in stateless mode and create a new transport per request, but reuse the server → affected by Issue 2 only; safe if your tools only return results and never send progress notifications or sampling/elicitation requests during execution.
  • You create a new server + transport per request → not affected.
  • Single-client environments (e.g., local development with one IDE) → not affected.

Patches

The fix (v1.26.0) adds runtime guards that turn silent data misrouting into immediate, actionable errors:

  1. Protocol.connect() now throws if the protocol is already connected to a transport, preventing silent transport overwriting (addresses Issue 2)
  2. Stateless StreamableHTTPServerTransport.handleRequest() now throws if called more than once, enforcing one-request-per-transport in stateless mode (addresses Issue 1)
  3. In-flight request handler abort controllers are cleaned up on close(), and sendNotification/sendRequest in handler extras check the abort signal before sending, preventing messages from leaking after a transport is replaced

Servers that were incorrectly reusing instances will now receive a clear error message directing them to create separate instances per connection.
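
In practice, code that previously misrouted silently now fails fast at the second connect(). A sketch of guard 1 (transportA/transportB are placeholders, and the exact error text may differ):

await server.connect(transportA);
try {
  // With the v1.26.0 guard, re-connecting an already-connected server
  // rejects instead of silently replacing transportA.
  await server.connect(transportB);
} catch (err) {
  console.error(err); // directs you to create one server per connection
}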

Workarounds

If you cannot upgrade immediately, ensure your server creates fresh McpServer and transport instances for each request (stateless) or session (stateful):

// Shared imports for both examples (ESM)
import { randomUUID } from 'node:crypto';
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StreamableHTTPServerTransport } from '@modelcontextprotocol/sdk/server/streamableHttp.js';

// Stateless mode: create new server + transport per request
app.post('/mcp', async (req, res) => {
  const server = new McpServer({ name: 'my-server', version: '1.0.0' });
  // ... register tools, resources, etc.
  const transport = new StreamableHTTPServerTransport({ sessionIdGenerator: undefined });
  await server.connect(transport);
  await transport.handleRequest(req, res);
});

// Stateful mode: create new server + transport per session
const sessions = new Map();
app.post('/mcp', async (req, res) => {
  const sessionId = req.headers['mcp-session-id'];
  if (sessions.has(sessionId)) {
    await sessions.get(sessionId).transport.handleRequest(req, res);
  } else {
    const server = new McpServer({ name: 'my-server', version: '1.0.0' });
    // ... register tools, resources, etc.
    const transport = new StreamableHTTPServerTransport({
      sessionIdGenerator: () => randomUUID(),
      // The session ID is only assigned while handleRequest() processes the
      // initialize request, so register the session from this callback rather
      // than reading transport.sessionId beforehand (it would still be
      // undefined at that point).
      onsessioninitialized: (newSessionId) => {
        sessions.set(newSessionId, { server, transport });
      }
    });
    await server.connect(transport);
    await transport.handleRequest(req, res);
  }
});
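
In stateful mode the same transport also serves GET (the standalone SSE stream) and DELETE (session termination) on the endpoint, so those verbs should be routed to the per-session transport as well. A sketch building on the sessions map above:

// Route GET (SSE stream) and DELETE (session termination) for an
// existing session to the same per-session transport instance.
const handleSessionRequest = async (req, res) => {
  const session = sessions.get(req.headers['mcp-session-id']);
  if (!session) {
    res.status(400).send('Invalid or missing session ID');
    return;
  }
  await session.transport.handleRequest(req, res);
};
app.get('/mcp', handleSessionRequest);
app.delete('/mcp', handleSessionRequest);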

Published to the GitHub Advisory Database Feb 4, 2026
Reviewed Feb 4, 2026
Published by the National Vulnerability Database Feb 4, 2026
Last updated Feb 9, 2026

Severity

High

CVSS overall score

This score calculates overall vulnerability severity from 0 to 10 and is based on the Common Vulnerability Scoring System (CVSS).
7.1 / 10

CVSS v3 base metrics

Attack vector
Network
Attack complexity
Low
Privileges required
Low
User interaction
None
Scope
Unchanged
Confidentiality
High
Integrity
Low
Availability
None

CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:L/A:N

EPSS score

Exploit Prediction Scoring System (EPSS)

This score estimates the probability of this vulnerability being exploited within the next 30 days. Data provided by FIRST.
(7th percentile)

Weaknesses

Concurrent Execution using Shared Resource with Improper Synchronization ('Race Condition')

The product contains a concurrent code sequence that requires temporary, exclusive access to a shared resource, but a timing window exists in which the shared resource can be modified by another code sequence operating concurrently.

CVE ID

CVE-2026-25536

GHSA ID

GHSA-345p-7cg4-v4c7
