Summary
Cross-client data leak via two distinct issues: (1) reusing a single StreamableHTTPServerTransport across multiple client requests, and (2) reusing a single McpServer/Server instance across multiple transports. Both are most common in stateless deployments.
Impact
This advisory covers two related but distinct vulnerabilities. A deployment may be affected by one or both.
Issue 1: Transport re-use
What happens: When a single StreamableHTTPServerTransport instance handles multiple client requests, JSON-RPC message ID collisions cause responses to be routed to the wrong client's HTTP connection. The transport maintains an internal requestId → stream mapping, and since MCP client SDKs generate message IDs using an incrementing counter starting at 0, two clients produce identical IDs. The second client's request overwrites the first client's mapping entry, routing the response to the wrong HTTP stream.
What is affected: All request types — tools/call, resources/read, prompts/get, etc. No server-initiated features are required to trigger this.
Conditions:
- A single StreamableHTTPServerTransport instance is reused across multiple client requests (most common in stateless mode without sessionIdGenerator)
- Two or more clients send requests concurrently
- Clients generate overlapping JSON-RPC message IDs (the SDK's default client uses an incrementing counter starting at 0)
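The collision mechanism in Issue 1 can be shown with a self-contained sketch. The Map below is a simplified model of the transport's internal requestId → stream bookkeeping, not the SDK's actual code; the function names are illustrative:

```typescript
// Simplified model of a shared transport's requestId -> response-stream map.
// Both clients start their JSON-RPC id counters at 0, so their first
// requests collide on the same key.
const streamMapping = new Map<number, string>();

function handleRequest(requestId: number, clientStream: string): void {
  // A second client using the same id silently overwrites the first entry.
  streamMapping.set(requestId, clientStream);
}

function routeResponse(requestId: number): string | undefined {
  return streamMapping.get(requestId);
}

handleRequest(0, "client-A-stream"); // client A sends request id 0
handleRequest(0, "client-B-stream"); // client B also sends request id 0

// The response to client A's request is now routed to client B's stream:
console.log(routeResponse(0)); // "client-B-stream"
```

Because the overwrite is a plain Map.set, nothing errors at collision time; the leak only becomes visible when the wrong client receives the response.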
Issue 2: Server/Protocol re-use
What happens: When a single McpServer (or Server) instance is connect()ed to multiple transports (one per client), the Protocol's internal this._transport reference is silently overwritten. The final response to a request is routed correctly (the Protocol captures the transport reference at request time), but any server-to-client messages sent during request handling use the shared this._transport reference, which may point to a different client's transport.
What is affected: This depends on what features your server uses:
- Final responses (the return value from a tool/resource/prompt handler): Not affected in most cases. The Protocol captures this._transport at request-handling time, not the transport that delivered the request. This means:
  - If a request is already in-flight when a second connect() occurs (i.e., the request arrived before the transport was overwritten), the captured reference is correct and the response routes properly.
  - If a request arrives on the old transport after a second connect() has overwritten this._transport, the captured reference points to the new transport, and the response is mis-routed. The requesting client will time out.
- Progress notifications sent during tool execution via sendNotification: Affected. These are dispatched through this._transport. When the transport has been overwritten and message IDs collide on the new transport, notifications are routed to the wrong client's HTTP stream.
- Sampling (createMessage) and elicitation requests sent during tool execution via sendRequest: Affected. Same mechanism: the request is sent to the wrong client.
- Spontaneous server-initiated notifications (outside any request handler): Affected. These are sent to whichever client's transport was most recently connected.
Conditions:
- A single McpServer/Server instance is connect()ed to multiple transports across requests or sessions
- Two or more clients connect concurrently
- For in-request notifications/requests: message ID collision on the other transport is required for a silent data leak (the SDK's default client uses an incrementing counter starting at 0). Without a collision, the transport will throw an error rather than misroute.
- For spontaneous notifications: no collision needed, messages are always sent to the last-connected client's transport
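The transport-overwrite mechanism in Issue 2 can be sketched with two stand-in classes. These are illustrative models of the behavior the advisory describes (a shared this._transport reference being silently replaced on connect()), not the SDK's actual source:

```typescript
// Minimal stand-in for a per-client transport.
class FakeTransport {
  sent: string[] = [];
  send(message: string): void {
    this.sent.push(message);
  }
}

// Minimal stand-in for the shared Protocol/server instance.
class FakeProtocol {
  private _transport?: FakeTransport;

  connect(transport: FakeTransport): void {
    // Pre-fix behavior: silently overwrites any existing transport.
    this._transport = transport;
  }

  // In-request notifications are dispatched through the shared reference.
  sendNotification(message: string): void {
    this._transport?.send(message);
  }
}

const server = new FakeProtocol();
const clientA = new FakeTransport();
const clientB = new FakeTransport();

server.connect(clientA);
server.connect(clientB); // client A's reference is silently replaced

// A progress notification meant for client A now reaches client B:
server.sendNotification("progress for A's request");
console.log(clientB.sent); // client B's stream now holds A's notification
```

The overwrite happens without any error or log line, which is why the v1.26.0 fix turns this situation into an immediate exception instead.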
How to tell if you're affected
- You use
sessionIdGenerator (stateful mode) with a new McpServer per session → not affected by either issue. Each session has its own transport and server instance.
- You use
sessionIdGenerator but share a single McpServer across sessions → not affected by Issue 1 (transport re-use), but affected by Issue 2 (server re-use) if your tools send progress notifications, sampling, or elicitation during execution.
- You are in stateless mode and reuse both a transport and a server across requests → affected by both issues; all request types can leak.
- You are in stateless mode and create a new transport per request, but reuse the server → affected by Issue 2 only; safe if your tools only return results without sending progress notifications, sampling, or elicitation during execution.
- You create a new server + transport per request → not affected.
- Single-client environments (e.g., local development with one IDE) → not affected.
Patches
The fix (v1.26.0) adds runtime guards that turn silent data misrouting into immediate, actionable errors:
- Protocol.connect() now throws if the protocol is already connected to a transport, preventing silent transport overwriting (addresses Issue 2)
- Stateless StreamableHTTPServerTransport.handleRequest() now throws if called more than once, enforcing one-request-per-transport in stateless mode (addresses Issue 1)
- In-flight request handler abort controllers are cleaned up on close(), and sendNotification/sendRequest in handler extras check the abort signal before sending, preventing messages from leaking after a transport is replaced
Servers that were incorrectly reusing instances will now receive a clear error message directing them to create separate instances per connection.
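The connect() guard can be approximated as follows. This is a sketch of the pattern with invented error wording and a minimal Transport interface, not the SDK's actual implementation:

```typescript
// Minimal transport contract for the sketch.
interface Transport {
  start(): void;
}

class GuardedProtocol {
  private _transport?: Transport;

  connect(transport: Transport): void {
    if (this._transport !== undefined) {
      // Pre-fix code silently overwrote the reference here; the fix
      // turns the reuse into an immediate, actionable error instead.
      throw new Error(
        "Protocol already connected to a transport; " +
          "create a separate server instance per connection"
      );
    }
    this._transport = transport;
    transport.start();
  }
}

const protocol = new GuardedProtocol();
protocol.connect({ start: () => {} }); // first connect succeeds
try {
  protocol.connect({ start: () => {} }); // second connect now throws
} catch (e) {
  console.log((e as Error).message);
}
```

Failing fast at connect() time is what converts a silent cross-client leak into a deployment bug that shows up in testing.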
Workarounds
If you cannot upgrade immediately, ensure your server creates fresh McpServer and transport instances for each request (stateless) or session (stateful):
```typescript
// Stateless mode: create new server + transport per request
app.post('/mcp', async (req, res) => {
  const server = new McpServer({ name: 'my-server', version: '1.0.0' });
  // ... register tools, resources, etc.
  const transport = new StreamableHTTPServerTransport({ sessionIdGenerator: undefined });
  await server.connect(transport);
  await transport.handleRequest(req, res);
});
```
```typescript
// Stateful mode: create new server + transport per session
const sessions = new Map();
app.post('/mcp', async (req, res) => {
  const sessionId = req.headers['mcp-session-id'];
  if (sessions.has(sessionId)) {
    await sessions.get(sessionId).transport.handleRequest(req, res);
  } else {
    const server = new McpServer({ name: 'my-server', version: '1.0.0' });
    // ... register tools, resources, etc.
    const transport = new StreamableHTTPServerTransport({
      sessionIdGenerator: () => randomUUID(),
      // The session ID is only assigned while handling the initialize
      // request, so register the session via this callback rather than
      // reading transport.sessionId before handleRequest() has run
      onsessioninitialized: (sid) => sessions.set(sid, { server, transport })
    });
    await server.connect(transport);
    await transport.handleRequest(req, res);
  }
});
```