Streamable HTTP vs SSE: Why MCP Changed Transports (And How to Migrate)
MCP introduced Streamable HTTP in March 2025, replacing SSE. With SSE deprecation approaching, here is what changed and how to migrate.
The Model Context Protocol shipped three transport mechanisms in under six months. The first spec (November 2024) defined stdio and HTTP+SSE. The March 2025 update added Streamable HTTP and simultaneously deprecated the HTTP+SSE transport that had barely been standardized. The first SSE removal deadlines arrive in mid-2026.
This is not protocol churn for its own sake. The old SSE transport had fundamental architecture problems that made it hostile to load balancers, serverless platforms, and firewalls. Streamable HTTP fixes all of them with a simpler design.
Here is exactly what changed, why it matters, and how to migrate your servers before the deadline.
The Three MCP Transports Explained
MCP defines how AI clients (Claude Desktop, Cursor, VS Code Copilot) communicate with tool servers. The transport layer determines how JSON-RPC messages move between client and server. There are three options.
stdio: Local Subprocess
The client launches the MCP server as a child process. Messages flow over stdin/stdout. No network involved.
{
  "mcpServers": {
    "my-tool": {
      "command": "npx",
      "args": ["-y", "my-mcp-server"]
    }
  }
}
Best for: Local tools that read files, run shell commands, or query local databases. Every MCP client supports stdio, and it requires zero infrastructure.
Limitation: The server dies when the client closes. No sharing between users, no remote access, no horizontal scaling.
HTTP+SSE: The Deprecated Remote Transport
The original remote transport from the 2024-11-05 spec. It used two separate HTTP endpoints:
- `GET /sse` -- The client opens a long-lived Server-Sent Events connection. The server immediately sends an `endpoint` event containing a URL (typically `/messages?sessionId=abc123`).
- `POST /messages` -- The client sends JSON-RPC requests here. Responses arrive back through the SSE stream from step 1.
Client                              Server
|                                    |
|--- GET /sse ---------------------->|  (opens persistent SSE connection)
|<-- event: endpoint ----------------|  (server sends /messages?sid=xxx)
|                                    |
|--- POST /messages?sid=xxx -------->|  (client sends JSON-RPC request)
|<-- SSE: JSON-RPC response ---------|  (response comes via SSE stream)
This worked, but the two-connection design created real problems (more on that below).
Streamable HTTP: The Current Standard
Introduced in the 2025-03-26 spec, Streamable HTTP replaces HTTP+SSE with a single-endpoint design. One URL (e.g., /mcp) handles everything:
- `POST` -- Client sends JSON-RPC messages. Server responds with either `application/json` (single response) or `text/event-stream` (SSE stream for multiple messages).
- `GET` -- Client opens an SSE stream for server-initiated messages (notifications, requests). Optional.
- `DELETE` -- Client terminates the session. Optional.
Client                              Server
|                                    |
|--- POST /mcp --------------------->|  (JSON-RPC request)
|<-- application/json ---------------|  (simple response)
|                                    |
|--- POST /mcp --------------------->|  (long-running request)
|<-- text/event-stream --------------|  (progress updates + final response)
|                                    |
|--- GET /mcp ---------------------->|  (listen for server notifications)
|<-- text/event-stream --------------|  (server pushes as needed)
Why SSE Is Being Deprecated
The old HTTP+SSE transport had five structural problems that Streamable HTTP eliminates.
1. Two connections, one session. The SSE stream and the POST endpoint had to be correlated by a session ID in the URL. If the SSE connection dropped, the client lost its channel for receiving responses. Reconnection required re-establishing the entire session.
2. Sticky sessions required. Because the SSE connection was stateful and long-lived, load balancers had to route all requests from a given client to the same server instance. This broke horizontal scaling and made blue-green deployments painful.
3. Serverless-hostile. Platforms like Cloudflare Workers, Vercel Functions, and AWS Lambda are designed for short-lived request/response cycles. A persistent SSE connection that sits idle for minutes between messages is the opposite of what serverless wants. The old transport forced "always-on" server infrastructure.
4. Firewall and proxy interference. Many corporate proxies and CDNs buffer or terminate long-lived HTTP connections. SSE streams would silently die behind aggressive proxies, causing mysterious failures that were difficult to debug.
5. One-way streaming. SSE is server-to-client only by design. The old transport bolted on client-to-server via a separate POST endpoint, but this created an asymmetric architecture that complicated error handling and flow control.
Streamable HTTP solves all five: single endpoint (no correlation needed), stateless option (no sticky sessions), request/response compatible (serverless-friendly), standard HTTP traffic patterns (proxy-safe), and true bidirectional communication on one connection.
How Streamable HTTP Works
The protocol is simpler than its predecessor. Here are the key mechanics.
Single Endpoint, Multiple Methods
The server exposes one URL. The client sends all JSON-RPC messages as POST requests to that URL. The `Accept` header must include both `application/json` and `text/event-stream`.
Response Modes
The server chooses how to respond based on the request:
- JSON mode: For simple request/response, return `Content-Type: application/json` with the JSON-RPC response. Fast, cacheable, serverless-compatible.
- SSE mode: For long-running operations, or when the server needs to send intermediate messages (progress updates, follow-up requests), return `Content-Type: text/event-stream`. The stream includes the final response and then closes.
The client must handle both. This flexibility is why "Streamable HTTP" is a better name than "HTTP" -- servers can stream but are not forced to.
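The dual-mode handling boils down to a dispatch on `Content-Type`. A sketch, assuming the response body has already been read as text (`parseSseData` and `parseMcpResponse` are illustrative helpers, not SDK functions):

```typescript
// Sketch: extract JSON payloads from a raw text/event-stream body.
// SSE events are separated by blank lines; payloads live on `data:` lines.
function parseSseData(body: string): unknown[] {
  return body
    .split("\n\n")
    .filter((chunk) => chunk.trim().length > 0)
    .map((chunk) =>
      chunk
        .split("\n")
        .filter((line) => line.startsWith("data:"))
        .map((line) => line.slice(5).trim())
        .join("\n")
    )
    .filter((data) => data.length > 0)
    .map((data) => JSON.parse(data));
}

// Dispatch on the Content-Type a Streamable HTTP server returned.
function parseMcpResponse(contentType: string, body: string): unknown[] {
  if (contentType.startsWith("application/json")) {
    return [JSON.parse(body)]; // single JSON-RPC response
  }
  if (contentType.startsWith("text/event-stream")) {
    return parseSseData(body); // one or more JSON-RPC messages
  }
  throw new Error(`Unexpected Content-Type: ${contentType}`);
}
```

Real clients parse the SSE stream incrementally as bytes arrive; the batch version above just shows the shape of the logic.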
Session Management
Sessions are optional. When enabled, the server returns an `Mcp-Session-Id` header with the initialization response. The client includes this header on all subsequent requests.
POST /mcp
Content-Type: application/json
Accept: application/json, text/event-stream
Mcp-Session-Id: 1868a90c-4f5b-4e3a-9c1d-7f2b8e6d3a12

{"jsonrpc": "2.0", "method": "tools/call", "params": {...}, "id": 1}
Stateless servers can skip session management entirely by setting `sessionIdGenerator` to `undefined`. This is ideal for serverless deployments where each request is independent.
Resumability
If the SSE connection drops mid-stream, the client can reconnect with a GET request including the Last-Event-ID header. The server replays missed events. This is built on the standard SSE reconnection mechanism -- no custom protocol needed.
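The resume request is nothing more than standard HTTP headers. A sketch of what a client might assemble before reconnecting (the `resumeHeaders` helper is illustrative; the client is assumed to track the last SSE `id:` field it received):

```typescript
// Sketch: headers for resuming a dropped SSE stream. The client echoes
// back the last event ID it saw; the server replays everything after it.
function resumeHeaders(
  sessionId: string | undefined,
  lastEventId: string | undefined
): Record<string, string> {
  const headers: Record<string, string> = { Accept: "text/event-stream" };
  if (sessionId) headers["Mcp-Session-Id"] = sessionId;
  if (lastEventId) headers["Last-Event-ID"] = lastEventId;
  return headers;
}
```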
Migration Guide: SSE to Streamable HTTP
Before (SSE Transport)
The old SSE transport required managing two endpoints and correlating connections:
import express from "express";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";

const app = express();
const server = new McpServer({ name: "my-server", version: "1.0.0" });

// Two separate endpoints, correlated by session ID
const transports = new Map<string, SSEServerTransport>();

app.get("/sse", async (req, res) => {
  const transport = new SSEServerTransport("/messages", res);
  transports.set(transport.sessionId, transport);
  res.on("close", () => transports.delete(transport.sessionId));
  await server.connect(transport);
});

app.post("/messages", async (req, res) => {
  const sessionId = req.query.sessionId as string;
  const transport = transports.get(sessionId);
  if (!transport) {
    res.status(400).send("Unknown session");
    return;
  }
  await transport.handlePostMessage(req, res);
});

app.listen(3000);
After (Streamable HTTP Transport)
The new transport uses a single endpoint. Here is a stateless version ideal for serverless:
import express from "express";
import { z } from "zod";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";

const app = express();
app.use(express.json());

app.post("/mcp", async (req, res) => {
  // A fresh server + transport per request: fully stateless
  const server = new McpServer({ name: "my-server", version: "1.0.0" });

  // Register your tools (the TypeScript SDK takes zod shapes for params)
  server.tool("hello", { name: z.string() }, async ({ name }) => ({
    content: [{ type: "text", text: `Hello, ${name}!` }],
  }));

  const transport = new StreamableHTTPServerTransport({
    sessionIdGenerator: undefined, // stateless: no Mcp-Session-Id issued
  });
  res.on("close", () => {
    transport.close();
    server.close();
  });

  await server.connect(transport);
  await transport.handleRequest(req, res, req.body);
});

// Reject GET/DELETE: a stateless server has no standalone stream or session
app.get("/mcp", (req, res) => res.status(405).end());
app.delete("/mcp", (req, res) => res.status(405).end());

app.listen(3000, "127.0.0.1");
For a stateful server with session management:
import { randomUUID } from "crypto";
import { z } from "zod";

const sessions = new Map<string, StreamableHTTPServerTransport>();

app.post("/mcp", async (req, res) => {
  const sessionId = req.headers["mcp-session-id"] as string | undefined;

  if (sessionId && sessions.has(sessionId)) {
    // Existing session -- reuse its transport
    const transport = sessions.get(sessionId)!;
    await transport.handleRequest(req, res, req.body);
  } else {
    // New session -- create server + transport
    const server = new McpServer({ name: "my-server", version: "1.0.0" });
    const transport = new StreamableHTTPServerTransport({
      sessionIdGenerator: () => randomUUID(),
      // The session ID is assigned during initialization, so register the
      // transport from this callback rather than before the first request
      onsessioninitialized: (sid) => sessions.set(sid, transport),
    });
    transport.onclose = () => {
      if (transport.sessionId) sessions.delete(transport.sessionId);
    };

    server.tool("hello", { name: z.string() }, async ({ name }) => ({
      content: [{ type: "text", text: `Hello, ${name}!` }],
    }));

    await server.connect(transport);
    await transport.handleRequest(req, res, req.body);
  }
});

app.get("/mcp", async (req, res) => {
  const sessionId = req.headers["mcp-session-id"] as string | undefined;
  const transport = sessionId ? sessions.get(sessionId) : undefined;
  if (transport) {
    await transport.handleRequest(req, res);
  } else {
    res.status(400).end();
  }
});
Migration Checklist
- Update the SDK -- Streamable HTTP support was added in `@modelcontextprotocol/sdk` v1.10.0 (April 2025). Run `npm install @modelcontextprotocol/sdk@latest`.
- Replace the transport import -- Swap `SSEServerTransport` for `StreamableHTTPServerTransport`.
- Merge endpoints -- Consolidate `/sse` + `/messages` into a single `/mcp` endpoint.
- Update client config -- Change the server URL from `http://localhost:3000/sse` to `http://localhost:3000/mcp`.
- Test with fallback -- During the transition, you can host both transports simultaneously on different paths. The spec recommends this for backward compatibility.
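The client-side change is usually a one-line URL swap. A sketch for an `mcpServers`-style config (exact key names vary by client, and some clients require an explicit transport `type` field for remote servers):

```json
{
  "mcpServers": {
    "my-tool": {
      "url": "http://localhost:3000/mcp"
    }
  }
}
```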
When to Use Each Transport
| Factor | stdio | Streamable HTTP |
|---|---|---|
| Deployment | Local subprocess | Remote server (any cloud) |
| Multi-user | No (1 client = 1 process) | Yes (single server, many clients) |
| Auth/permissions | Inherits OS user | OAuth 2.1, API keys, custom |
| Scaling | Vertical only | Horizontal (stateless mode) |
| Serverless | No | Yes (Cloudflare Workers, Vercel, AWS Lambda) |
| Setup complexity | Minimal | Requires HTTP server |
| Best for | Dev tools, local file access, CLI integrations | SaaS integrations, shared team tools, production APIs |
Rule of thumb: If the tool only needs to run on the developer's machine, use stdio. If anyone else needs to access it -- teammates, CI/CD pipelines, production apps -- use Streamable HTTP.
New to MCP? Read What is an MCP server? for the fundamentals, or How to build an MCP server for a step-by-step tutorial.
Where to Host Streamable HTTP Servers
Streamable HTTP was designed for modern cloud infrastructure. All major platforms support it:
- Cloudflare Workers -- Edge deployment with the `McpAgent` class for Durable Objects session management. 100,000 requests/day free tier. Fastest cold starts.
- Vercel Functions -- Native Next.js integration. Stateless mode works out of the box with Fluid Compute.
- AWS Lambda -- Behind API Gateway, stateless Streamable HTTP maps directly to Lambda's request/response model.
- Any Express/Node.js host -- Railway, Fly.io, Render, self-hosted VPS. The SDK's Express integration works everywhere Node runs.
For a full deployment walkthrough, see How to deploy a remote MCP server.
Client Support Matrix (March 2026)
| Client | stdio | SSE (Legacy) | Streamable HTTP | OAuth 2.1 |
|---|---|---|---|---|
| Claude Desktop | Yes | Yes | Yes | Yes |
| Claude Code | Yes | Yes | Yes | Yes |
| Cursor | Yes | Yes | Yes | Yes |
| VS Code (Copilot) | Yes | Yes | Yes | Yes |
| Windsurf | Yes | Yes | Yes | Partial |
| ChatGPT | No | No | Yes (remote only) | Yes |
| OpenAI Agents SDK | Yes | Yes | Yes | Yes |
| Cline | Yes | Yes | Yes | No |
All major clients now support Streamable HTTP. ChatGPT is notable for supporting only remote Streamable HTTP (no stdio, no SSE). If you build with Streamable HTTP, you cover every client. If you only support SSE, you already miss ChatGPT and will miss more as the deprecation deadline passes.
The Timeline
- November 2024 -- MCP spec 2024-11-05 ships with stdio and HTTP+SSE.
- March 2025 -- Spec 2025-03-26 introduces Streamable HTTP and deprecates HTTP+SSE.
- April 2025 -- TypeScript SDK v1.10.0 adds `StreamableHTTPServerTransport`.
- Mid-2026 -- SSE transport removal deadlines arrive. Atlassian Rovo: June 30, 2026. Keboola: April 1, 2026. More will follow.
The writing is on the wall. Streamable HTTP is not just recommended -- it is the only remote transport with a future in MCP. Start migrating now while you can run both transports in parallel, and cut over before the deadlines hit.
Build and deploy your own MCP server with our guides: What is an MCP server? | How to build an MCP server | Deploy a remote MCP server