TL;DR
- Long polling: easiest to bolt onto any backend; the client opens a request that the server holds until there’s data (or a timeout), then immediately reconnects. Stateless but chatty, with higher per‑message overhead.
- Server‑Sent Events (SSE / EventSource): one‑way server→browser stream over HTTP with auto‑reconnect and simple text framing. Great for live feeds, logs, notifications.
- WebSockets: full‑duplex, bidirectional messaging over a single TCP connection established via an HTTP upgrade. Best for interactive apps (chat, collaborative editors, multiplayer), custom protocols, and binary data.
- Start simple: SSE for push‑only; WebSockets for interactive; long polling as a universal fallback or when infra limits prevent persistent connections.
Mental model (60 seconds)
Client ─────HTTP─────► Server
Long polling: request ────(hold)──▶ response (data/timeout) → reconnect → …
SSE (HTTP stream): request ─► data: …\n\n events keep streaming until closed
WebSocket: HTTP Upgrade → persistent, bi‑directional messages both ways
- Long polling: normal HTTP request/response loop; server waits up to N seconds for data.
- SSE: `text/event-stream` over HTTP/1.1+; messages like `data: {...}\n\n`; the browser auto‑reconnects and resends `Last-Event-ID`.
- WebSocket: `ws://` / `wss://` after an HTTP upgrade; frames carry text or binary; ping/pong keepalive.
Feature comparison
| Aspect | Long polling | SSE (EventSource) | WebSocket |
|---|---|---|---|
| Direction | Client⇢Server request; data in response | Server⇢Client only | Bi‑directional |
| Latency | Low–moderate (reconnect gaps) | Low (stream) | Very low (duplex) |
| Overhead | High (many HTTP round‑trips) | Low | Low |
| Binary support | Indirect (base64) | Text only (UTF‑8) | Yes (binary frames) |
| Browser support | Universal | Good (no legacy IE) | Universal |
| Backpressure | Via short responses | Stream pacing; simple | Requires protocol handling |
| Proxies/CDNs | Easy; just HTTP | Usually fine; some proxies buffer | Needs upgrade pass‑through |
| Scaling | Easy, stateless | Similar to long‑lived HTTP; track connections | Stateful; often needs sticky sessions |
| Best for | Simple notifications | Feeds, progress, logs, prices | Chat, collab, gaming, custom RPC |
Minimal code snippets
Long polling
Client (JS)
```js
let lastSeen = 0;

async function poll() {
  while (true) {
    try {
      const res = await fetch("/updates?since=" + lastSeen, { credentials: "include" });
      if (res.status === 204) continue;          // timed out with no data: reconnect immediately
      const items = await res.json();
      items.forEach(handle);
      lastSeen = items.at(-1)?.id ?? lastSeen;   // remember the newest id we've seen
    } catch (err) {
      console.warn("poll failed, retrying", err);
      await new Promise((r) => setTimeout(r, 2000)); // brief pause before retrying
    }
  }
}
poll();
```
Server (Node/Express)
app.get("/updates", async (req, res) => {
const since = req.query.since;
const item = await waitForNextItem(since, { timeoutMs: 25000 }); // long hold
if (!item) return res.sendStatus(204); // no content, client reconnects
res.json([item]); // or a batch
});
Server‑Sent Events (SSE)
Client (JS)
const es = new EventSource("/stream"); // auto-reconnects
es.onmessage = (ev) => handle(JSON.parse(ev.data));
es.addEventListener("price", (ev) => handlePrice(JSON.parse(ev.data)));
es.onerror = (e) => console.warn("SSE error", e);
Server (Node/Express)
app.get("/stream", (req, res) => {
res.setHeader("Content-Type", "text/event-stream");
res.setHeader("Cache-Control", "no-cache");
res.setHeader("Connection", "keep-alive");
const send = (event, data) => {
if (event) res.write(`event: ${event}
`);
res.write(`data: ${JSON.stringify(data)}
`);
};
const heartbeat = setInterval(() => res.write(":keep-alive
"), 15000);
const unsub = bus.on("price", (p) => send("price", p));
req.on("close", () => { clearInterval(heartbeat); unsub(); });
});
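To make the built‑in reconnect lossless, attach an `id:` field to each event; the browser stores it and sends it back in a `Last-Event-ID` request header when `EventSource` reconnects. A sketch of the replay logic, with `history.since(id)` as a hypothetical per‑topic buffer:

```js
app.get("/stream", (req, res) => {
  res.setHeader("Content-Type", "text/event-stream");
  res.setHeader("Cache-Control", "no-cache");
  res.flushHeaders();

  const send = (id, event, data) => {
    res.write(`id: ${id}\n`);                  // echoed back later as Last-Event-ID
    if (event) res.write(`event: ${event}\n`);
    res.write(`data: ${JSON.stringify(data)}\n\n`);
  };

  // Replay anything the client missed while it was disconnected.
  const lastId = req.headers["last-event-id"];
  if (lastId) history.since(lastId).forEach((e) => send(e.id, e.type, e.payload));

  const unsub = bus.on("price", (p) => send(p.id, "price", p));
  req.on("close", unsub);
});
```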
WebSockets
Client (JS)
const ws = new WebSocket("wss://example.com/ws");
ws.onopen = () => ws.send(JSON.stringify({ type: "hello" }));
ws.onmessage = (ev) => handle(JSON.parse(ev.data));
ws.onclose = () => retrySoon();
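`retrySoon()` has to be hand‑rolled because, unlike `EventSource`, the browser `WebSocket` never reconnects on its own. A minimal sketch of reconnect with exponential backoff and jitter (`handle` is the same callback as above):

```js
let attempt = 0;

function connect() {
  const ws = new WebSocket("wss://example.com/ws");

  ws.onopen = () => {
    attempt = 0;                                           // reset backoff after a good connection
    ws.send(JSON.stringify({ type: "hello" }));
  };
  ws.onmessage = (ev) => handle(JSON.parse(ev.data));
  ws.onclose = () => {
    const base = Math.min(30_000, 1000 * 2 ** attempt++);  // 1s, 2s, 4s, … capped at 30s
    const delay = base / 2 + Math.random() * (base / 2);   // jitter to avoid thundering herds
    setTimeout(connect, delay);
  };
}

connect();
```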
Server (Node + ws)
import { WebSocketServer } from "ws";
const wss = new WebSocketServer({ server: httpServer, path: "/ws" });
wss.on("connection", (socket) => {
socket.on("message", (buf) => {
const msg = JSON.parse(buf.toString());
// echo or route
socket.send(JSON.stringify({ ok: true, t: Date.now() }));
});
const ping = setInterval(() => { socket.ping(); }, 30000);
socket.on("close", () => clearInterval(ping));
});
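The per‑socket ping above keeps intermediaries from idling the connection out, but it never notices a peer that vanished without a close frame. A common pattern with the `ws` library is to track pongs and terminate sockets that stop answering; a sketch:

```js
wss.on("connection", (socket) => {
  socket.isAlive = true;
  socket.on("pong", () => { socket.isAlive = true; });
});

// One sweep for every client instead of a timer per socket.
const sweep = setInterval(() => {
  for (const socket of wss.clients) {
    if (!socket.isAlive) { socket.terminate(); continue; } // no pong since the last sweep
    socket.isAlive = false;
    socket.ping();
  }
}, 30000);

wss.on("close", () => clearInterval(sweep));
```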
Choosing wisely (rules of thumb)
- Push‑only updates? → SSE (simplest, great DX).
- Interactive two‑way? → WebSocket (chat, cursors, multiplayer).
- Can’t keep connections open or infra is HTTP‑only? → Long polling.
- Binary or custom protocol? → WebSocket.
- Need browser auto‑reconnect + event IDs? → SSE (`Last-Event-ID`).
Scaling & ops notes
- Connection counts: persistent connections consume memory/file descriptors; size your server or use a gateway that supports streaming and WS.
- Load balancing: WS and SSE are stateful; prefer sticky sessions or externalize state (pub/sub) so any node can serve updates.
- Heartbeats: send periodic ping/pong (WS) or comment lines (`:keep-alive` in SSE) to keep intermediaries from closing idle links.
- Backpressure: throttle broadcast rates; consider per‑client queues and drop policies.
- HTTP/2 & HTTP/3: both play well with SSE; WS works via specific extensions (still an upgrade).
- CORS & security: treat endpoints like APIs; validate origins, auth tokens, and apply rate limits. For WS, implement origin checks at upgrade.
Pitfalls & fixes
| Problem | Why it hurts | Fix |
|---|---|---|
| Long polling floods on high churn | Too many connects/responses | Increase hold time, batch updates, or move to SSE/WS |
| Proxy buffering SSE | Delayed delivery | Send heartbeats; set no‑buffering headers where supported |
| Dropped WS connections | Idle timeouts/NAT | Heartbeats + exponential‑backoff reconnect |
| Ordering issues | Out‑of‑order messages | Add monotonic IDs/timestamps; reorder on the client |
| Memory blowups | Slow consumers | Cap per‑client queue; drop oldest; backpressure (sketch below) |
| Auth drift on long‑lived links | Revoked tokens still active | Short‑lived auth + re‑auth, or server‑side session validation |
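For the slow‑consumer row, a bounded per‑client queue that drops the oldest messages is usually enough. A rough sketch for the `ws` server above, using the socket's `bufferedAmount` as the pressure signal (the thresholds are assumptions to tune):

```js
const MAX_QUEUE = 100;                 // max messages held per client
const MAX_BUFFERED = 1024 * 1024;      // >1 MB of unsent bytes counts as "slow"

function makeSender(socket) {
  const queue = [];
  const flush = () => {
    while (queue.length && socket.bufferedAmount < MAX_BUFFERED) {
      socket.send(queue.shift());
    }
  };
  const timer = setInterval(flush, 100);          // keep draining as the socket frees up
  socket.on("close", () => clearInterval(timer));

  return (msg) => {
    if (queue.length >= MAX_QUEUE) queue.shift(); // drop the oldest under pressure
    queue.push(JSON.stringify(msg));
    flush();
  };
}

// Inside wss.on("connection"): const send = makeSender(socket); send({ type: "tick" });
```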
Quick checklist
- [ ] Pick SSE for simple server→client streams; WebSocket for two‑way.
- [ ] Implement reconnect (SSE built‑in; WS manual).
- [ ] Add heartbeats to survive proxies/NAT.
- [ ] Plan for backpressure and ordering.
- [ ] Secure with auth, origin/CORS, and rate limits.
- [ ] Design for sticky sessions or shared pub/sub for horizontal scale.
One‑minute adoption plan
- Prototype with SSE or WS locally; measure latency and server resource usage.
- Add reconnect with exponential backoff and jitter.
- Introduce heartbeats and test behind your proxy/CDN.
- Externalize fan‑out with pub/sub (e.g., Redis, NATS, Kafka) before scaling out; a sketch follows this list.
- Add metrics: open connections, msgs/sec, queue depth, reconnects, p95 latency.
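For the pub/sub step, any broker works. A sketch with Redis via the `ioredis` client (the client choice and channel name are assumptions): every node publishes the updates it produces, and every node subscribes and pushes what arrives to its own SSE/WebSocket clients.

```js
import Redis from "ioredis";

const pub = new Redis();  // assumption: local Redis; pass your connection URL in practice
const sub = new Redis();  // Redis requires a dedicated connection for subscribing

// Any node calls this when it produces an update locally.
export function broadcast(update) {
  pub.publish("updates", JSON.stringify(update));
}

// Every node receives every update and fans it out over its own connections.
sub.subscribe("updates");
sub.on("message", (_channel, raw) => {
  deliverToLocalClients(JSON.parse(raw)); // hypothetical: write to this node's SSE streams / WS sockets
});
```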