Disclaimer: This blog post is automatically generated from project documentation and technical proposals using AI assistance. The content represents our development journey and architectural decisions. Code examples are simplified illustrations and may not reflect the exact production implementation.
Table of Contents
- The Connection Problem
- Singleton Connection Managers
- Event Distribution Architecture
- Implementation Details
- Benefits and Trade-offs
- Results and Benefits
- Key Learnings
The Connection Problem
When Caroline and I were building the recorder page for Scores, we noticed something alarming: each web component was creating its own connection to the server. The audio recorder had a WebSocket, the manual input component had another WebSocket, each of the four biometric monitors had their own WebSocket, and the score dashboard, match highlights, and real-time updates each had EventSource connections.
That’s 9+ connections per page. With just a few users, the server was managing dozens of connections, each with its own TCP handshake, HTTP headers, and keep-alive overhead.
```mermaid
graph TB
    subgraph "Old Architecture"
        A[audio-recorder] --> WS1[WebSocket 1]
        B[manual-input] --> WS2[WebSocket 2]
        C[biometric-mock player 1] --> WS3[WebSocket 3]
        D[biometric-mock player 2] --> WS4[WebSocket 4]
        E[biometric-mock player 3] --> WS5[WebSocket 5]
        F[biometric-mock player 4] --> WS6[WebSocket 6]
        G[match-score] --> ES1[EventSource 1]
        H[match-dashboard] --> ES2[EventSource 2]
        I[match-highlights] --> ES3[EventSource 3]
    end

    style WS1 fill:#ffcdd2
    style WS2 fill:#ffcdd2
    style WS3 fill:#ffcdd2
    style WS4 fill:#ffcdd2
    style WS5 fill:#ffcdd2
    style WS6 fill:#ffcdd2
    style ES1 fill:#ffcdd2
    style ES2 fill:#ffcdd2
    style ES3 fill:#ffcdd2
```
We needed a solution that would pool connections while keeping components independent.
Singleton Connection Managers
Caroline suggested using singleton classes—one for WebSocket connections (bidirectional streaming) and one for EventSource connections (server-sent events). Each component registers with the singleton instead of creating its own connection.
Here’s the architecture:
```mermaid
graph TB
    subgraph "New Architecture"
        SC[StreamConnection Singleton]
        EC[EventConnection Singleton]
        A[audio-recorder] --> SC
        B[manual-input] --> SC
        C[biometric player 1] --> SC
        D[biometric player 2] --> SC
        E[biometric player 3] --> SC
        F[biometric player 4] --> SC
        G[match-score] --> EC
        H[match-dashboard] --> EC
        I[match-highlights] --> EC
    end

    SC --> WS[Single WebSocket]
    EC --> ES[Single EventSource]

    style WS fill:#c8e6c9
    style ES fill:#c8e6c9
```
Now we have just 2 connections instead of 9+, regardless of how many components are on the page.
Event Distribution Architecture
The connection managers use an event distribution layer based on the browser’s native EventTarget API. Components subscribe to specific channels, and the manager routes messages to the appropriate subscribers.
Here’s how the StreamConnection manager works:
```js
export class StreamConnection extends EventTarget {
  static #instance = null;

  #ws = null;
  #url = null;
  #messageHandlers = new Map(); // channelId -> Set<handler>

  // All components share one instance, and therefore one WebSocket
  static getInstance() {
    if (!StreamConnection.#instance) {
      StreamConnection.#instance = new StreamConnection();
    }
    return StreamConnection.#instance;
  }

  connect(url) {
    this.#url = url ?? this.#url;
    this.#ws = new WebSocket(this.#url);
    this.#ws.addEventListener('message', (event) => this.#handleMessage(event));
  }

  subscribe(channelId, handler) {
    if (!this.#messageHandlers.has(channelId)) {
      this.#messageHandlers.set(channelId, new Set());
    }
    this.#messageHandlers.get(channelId).add(handler);
  }

  send(channelId, data) {
    if (this.#ws?.readyState === WebSocket.OPEN) {
      this.#ws.send(JSON.stringify({ channel: channelId, data }));
    }
  }

  #handleMessage(event) {
    const { channel, data } = JSON.parse(event.data);

    // Route to the handlers subscribed to this channel only
    const handlers = this.#messageHandlers.get(channel);
    if (handlers) {
      handlers.forEach((handler) => handler(data));
    }
  }
}
```
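Handlers also need to be released when a component leaves the page, otherwise the Set keeps them alive forever. A minimal sketch of the matching `unsubscribe` method (not shown in the excerpt above):

```js
// Remove a handler; drop the channel entry once no handlers remain
unsubscribe(channelId, handler) {
  const handlers = this.#messageHandlers.get(channelId);
  if (!handlers) return;

  handlers.delete(handler);
  if (handlers.size === 0) {
    this.#messageHandlers.delete(channelId);
  }
}
```

Web components call this from `disconnectedCallback` so a removed element stops receiving messages.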
Components use the manager like this:
```js
// In a web component
const connection = StreamConnection.getInstance();
connection.subscribe('audio-recorder', (data) => {
  this.handleAudioData(data);
});

// Send data through the shared connection
connection.send('audio-recorder', audioBuffer);
```
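The EventConnection manager mirrors this pattern over a single EventSource. A sketch under the same assumptions (the server tags each SSE payload with a `channel` field, as shown in the server-side section below):

```js
export class EventConnection extends EventTarget {
  static #instance = null;

  #es = null;
  #messageHandlers = new Map(); // channelId -> Set<handler>

  static getInstance() {
    if (!EventConnection.#instance) {
      EventConnection.#instance = new EventConnection();
    }
    return EventConnection.#instance;
  }

  connect(url) {
    this.#es = new EventSource(url);

    // Every SSE payload carries a channel field; route it like the WebSocket manager
    this.#es.addEventListener('message', (event) => {
      const { channel, data } = JSON.parse(event.data);
      this.#messageHandlers.get(channel)?.forEach((handler) => handler(data));
    });
  }

  subscribe(channelId, handler) {
    if (!this.#messageHandlers.has(channelId)) {
      this.#messageHandlers.set(channelId, new Set());
    }
    this.#messageHandlers.get(channelId).add(handler);
  }
}
```

Since server-sent events are read-only, EventConnection needs no `send` method.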
Implementation Details
Reconnection Logic
One of the tricky parts was handling reconnections. If the WebSocket disconnects, all components should automatically reconnect without manual intervention:
```js
#reconnect() {
  if (this.#reconnectAttempts >= this.#maxReconnectAttempts) {
    this.#status = 'failed';
    this.#dispatchStatusEvent();
    return;
  }

  const delay = this.#reconnectDelay * Math.pow(2, this.#reconnectAttempts);
  this.#reconnectAttempts++;

  setTimeout(() => {
    this.connect();
  }, delay);
}
```
This exponential backoff prevents overwhelming the server with reconnection attempts while still recovering quickly from transient failures.
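For the backoff to recover across repeated outages, the attempt counter must reset once a connection actually opens. A minimal sketch, assuming this handler is registered as the socket's `open` listener in `connect()`:

```js
#handleOpen() {
  // A successful open means the next disconnect starts backoff from the base delay
  this.#reconnectAttempts = 0;
  this.#status = 'open';
  this.#dispatchStatusEvent();
}
```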
Binary Data Handling
The audio recorder sends binary PCM data, which needs special handling. We prepend a JSON header to identify the channel:
```js
sendBinary(channelId, arrayBuffer) {
  // Newline-delimited JSON header tells the server which channel owns this frame
  const header = new TextEncoder().encode(
    JSON.stringify({ channel: channelId }) + '\n'
  );

  // Concatenate header and binary payload into a single frame
  const combined = new Uint8Array(header.length + arrayBuffer.byteLength);
  combined.set(header, 0);
  combined.set(new Uint8Array(arrayBuffer), header.length);

  this.#ws.send(combined);
}
```
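A component then streams each captured chunk through the pooled socket, tagged with its channel ID (the `pcmChunk` variable here is illustrative):

```js
// In the audio-recorder component: forward a captured PCM chunk
connection.sendBinary('audio-recorder', pcmChunk.buffer);
```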
Server-Side Routing
On the server, we updated the WebSocket handler to route messages based on the channel (the `isJSON` check sketched below is one simple way to tell JSON control frames apart from binary audio frames):

```js
// A frame is JSON if the whole buffer parses; binary audio frames
// (JSON header + PCM payload) fail this check and fall through below.
function isJSON(buffer) {
  try {
    JSON.parse(buffer.toString());
    return true;
  } catch {
    return false;
  }
}

ws.on('message', (buffer) => {
  // JSON messages
  if (isJSON(buffer)) {
    const { channel, data } = JSON.parse(buffer.toString());

    if (channel === 'audio-recorder') {
      handleAudio(data);
    } else if (channel.startsWith('player-')) {
      handleBiometric(channel, data);
    }
    return;
  }

  // Binary messages (audio): split the newline-delimited header from the payload
  const headerEnd = buffer.indexOf('\n');
  const header = JSON.parse(buffer.slice(0, headerEnd).toString());
  const payload = buffer.slice(headerEnd + 1);

  if (header.channel === 'audio-recorder') {
    audioStream.write(payload);
  }
});
```
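The EventSource side uses the same channel-tagging idea: each server-sent event carries the channel inside its JSON payload, and the EventConnection manager routes it on arrival. A sketch of the broadcast path (the `sseClients` collection is hypothetical bookkeeping for connected responses):

```js
// Hypothetical helper: push one update to every connected SSE client
function broadcast(channel, data) {
  const frame = `data: ${JSON.stringify({ channel, data })}\n\n`;
  for (const res of sseClients) {
    res.write(frame);
  }
}

// e.g. broadcast('match-score', { home: 2, away: 1 });
```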
Status Events
Components need to know when the connection state changes (connecting, open, closed, error). We use custom events:
```js
#dispatchStatusEvent() {
  this.dispatchEvent(new CustomEvent('status', {
    detail: { status: this.#status }
  }));
}
```

Components listen for status changes and gate their features on the result:

```js
connection.addEventListener('status', (event) => {
  if (event.detail.status === 'open') {
    this.enableFeatures();
  } else {
    this.disableFeatures();
  }
});
```
Benefits and Trade-offs
Benefits
Resource Efficiency: With N users, we went from 9N connections to 2N connections. At 10 concurrent users, that’s 90 connections down to 20—a 78% reduction.
Simplified Client Code: Components don’t manage connections, reconnection logic, or error handling. They just subscribe and send/receive data.
Centralized Monitoring: Connection state is in one place. We can add logging, metrics, and health checks without touching component code.
Flexible Routing: Adding a new component or channel doesn’t require new WebSocket infrastructure—just register a new channel ID.
Trade-offs
Shared Fate: If the StreamConnection dies, all components using it are affected. With individual connections, failures were isolated.
Channel Naming: Components need to coordinate channel IDs to avoid collisions. We use a convention: `{component-type}-{instance-id}`.
Debugging Complexity: With individual connections, you could see exactly which component was sending what in browser DevTools. Now all messages go through one connection, requiring better logging.
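One way we can claw back some of that visibility is to log all routed traffic at the single choke point. A sketch of a debug hook in the `#handleMessage` method shown earlier (the `debug` flag is hypothetical):

```js
#handleMessage(event) {
  const { channel, data } = JSON.parse(event.data);

  // One choke point sees every message, so one log line restores per-channel visibility
  if (this.debug) {
    console.debug(`[StreamConnection] ${channel}`, data);
  }

  this.#messageHandlers.get(channel)?.forEach((handler) => handler(data));
}
```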
Results and Benefits
After implementing connection pooling, we measured the impact:
| Metric | Before | After | Improvement |
|---|---|---|---|
| Connections per user | 9 | 2 | 78% fewer |
| Total with 100 users | 900 | 200 | 78% fewer |
| Initial connection time | ~400ms | ~150ms | 62% faster |
| Memory per user (estimated) | ~5MB | ~1.5MB | 70% less |
The server was noticeably less loaded, and page initialization felt snappier.
Key Learnings
Start with Simplicity: Our first version had individual connections because it was easier to implement per-component. When we hit scale issues, pooling became necessary.
EventTarget is Powerful: Using the browser’s native EventTarget API for pub/sub saved us from bringing in a dependency or rolling our own event system.
Binary Data Needs Planning: The audio streaming use case required careful thought about header formats and buffer handling. JSON-only would have been simpler.
Component Isolation Matters: We kept components unaware of the connection pooling implementation. They just receive a connection manager instance and use it like they would a direct connection.
Server-Side Routing is Simple: Adding channel-based routing on the server was straightforward—just parse the channel field and dispatch to the right handler.
This refactoring shows that sometimes you need to hit real scale before optimizing. If we'd built pooling from day one, it would have been over-engineering. But once we had multiple components creating connections, the optimization became obvious and valuable.
Working through connection pooling with Caroline taught us an important lesson: premature optimization can add complexity, but addressing real performance problems with well-designed abstractions simplifies the system overall. The connection managers added a layer of indirection, but they eliminated repetitive connection management code across components.
The pattern also scales nicely—when we add video streaming or additional biometric sensors, they’ll just register new channels with the existing connections rather than creating more network overhead.