Disclaimer: This blog post is automatically generated from project documentation and technical proposals using AI assistance. The content represents our development journey and architectural decisions. Code examples are simplified illustrations and may not reflect the exact production implementation.
Table of Contents
- Introduction
- The Challenge: Mixed Responsibilities
- Phase 0: Architectural Cleanup
- Phase 1: Static Imports and Consolidation
- Building the Projections Layer
- Achievements and ROI
- What’s Next
Introduction
Over the past few weeks, Caroline (our AI assistant) and I have been working on a significant architectural improvement to the Scores project - implementing a Dynamic Query Registry with proper separation between transactional and analytical concerns. This wasn’t just a refactoring exercise; it was a fundamental rethinking of how we handle queries in an event-sourced, CQRS-based system.
The journey involved cleaning up architectural boundaries, consolidating projection infrastructure, and removing backward compatibility layers that had accumulated during rapid development. Let me walk you through what we built and why it matters.
The Challenge: Mixed Responsibilities
Event sourcing systems naturally split into two query worlds, each optimized for different workloads:
- Transactional Queries (OLTP): Read current state from PostgreSQL - optimized for low-latency, high-concurrency reads and writes of individual records
- Analytical Queries (OLAP): Read denormalized projections from ClickHouse - optimized for complex aggregations, time-series analysis, and scanning large datasets
Our initial implementation had these concerns mixed together in the Application layer. The domain package contained ClickHouse-specific projection types, and the Application class handled both command execution and all types of queries without clear boundaries. This violated the principle of separation between OLTP and OLAP workloads, made the system harder to reason about, and prevented independent scaling of analytical queries.
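To make the split concrete, the two worlds ask very different questions of the same domain data. The queries below are illustrative sketches; the table and column names are not our production schema:

```typescript
// OLTP (PostgreSQL): fetch one record's current state by primary key —
// low latency, touches a handful of rows.
const transactionalQuery = `
  SELECT id, state, updated_at
  FROM matches
  WHERE id = $1
`;

// OLAP (ClickHouse): scan and aggregate many denormalized event rows —
// exactly the workload columnar storage is built for.
const analyticalQuery = `
  SELECT player_id,
         countIf(outcome = 'goal') AS goals,
         avg(shot_distance)        AS avg_distance
  FROM match_events
  WHERE match_id = {matchId:String}
  GROUP BY player_id
`;
```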
graph TB
subgraph "Before: Mixed Concerns"
APP[Application Class]
APP -->|Handles| CMD[Commands]
APP -->|Handles| TQ[Transactional Queries]
APP -->|Handles| PQ[Projection Queries]
PQ -->|Query| CH[(ClickHouse)]
TQ -->|Query| PG[(PostgreSQL)]
CMD -->|Write| PG
end
subgraph "After: Clear Separation"
APP2[Application]
PROJ[Projections]
APP2 -->|Handles| CMD2[Commands]
APP2 -->|Handles| TQ2[Transactional Queries]
PROJ -->|Handles| PQ2[Projection Queries]
PQ2 -->|Query| CH2[(ClickHouse)]
TQ2 -->|Query| PG2[(PostgreSQL)]
CMD2 -->|Write| PG2
end
style APP fill:#ffcccc
style APP2 fill:#ccffcc
style PROJ fill:#ccffcc
Phase 0: Architectural Cleanup
The first major step was creating a dedicated Projections class in the matches-projections package. This gave analytical queries their own home, completely independent from the transactional Application layer.
Key decisions made:
- Complete Independence: Application.Create() handles commands + transactional queries; Projections.Create() handles analytical queries + event denormalization.
- Service Flexibility: Each service instantiates only what it needs:
  - Projection Consumer: Only Projections.Create()
  - Realtime Service: Both Application.Create() + Projections.Create()
  - API Service: Both instances
  - Command Handler: Only Application.Create()
This separation enabled us to remove the domain package dependency from projections entirely. Projection queries no longer leaked ClickHouse concerns into our core domain model.
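A minimal sketch of what that per-service wiring looks like. The factory signatures and dependency shape here are assumptions for illustration; the real Create() methods take our actual infrastructure dependencies:

```typescript
// Hypothetical dependency bag; in production this would carry the
// PostgreSQL pool, ClickHouse client, event bus, etc.
type Deps = { pg?: unknown; clickhouse?: unknown };

class Application {
  // Commands + transactional queries
  static Create(_deps: Deps): Application {
    return new Application();
  }
}

class Projections {
  // Analytical queries + event denormalization
  static Create(_deps: Deps): Projections {
    return new Projections();
  }
}

// Each service instantiates only the layer(s) it needs:
const projectionConsumer = { projections: Projections.Create({}) };
const commandHandler = { app: Application.Create({}) };
const apiService = {
  app: Application.Create({}),
  projections: Projections.Create({}),
};
```

The payoff is that a service like the projection consumer never links the transactional stack at all, and vice versa for the command handler.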
The Projections Registry
At the heart of the new architecture is a self-describing projection registry:
export class Projections {
  #registry = new Map<ProjectionType, ProjectionRegistryEntry>();
  register(type: ProjectionType, handler: ProjectionHandler): this {
    // Metadata lives as static properties on the handler's class,
    // so we read it off the constructor rather than the instance.
    const ctor = handler.constructor as typeof BaseProjectionHandler;
    this.#registry.set(type, {
      type,
      name: ctor.Name,
      version: ctor.Version,
      description: ctor.Description,
      subject: ctor.Subject,
      handler,
    });
    return this;
  }
  async execute<TResult>(
    type: ProjectionType,
    params: BaseProjectionParams
  ): Promise<ProjectionResult<TResult>> {
    // Looks up the handler, then wraps execution with:
    // - OpenTelemetry tracing
    // - Error handling
    // - Performance measurement
    // (body elided)
  }
}
Each projection became self-describing through static properties:
export class PlayerMomentumProjection extends BaseProjectionHandler {
static override readonly Name = 'PlayerMomentum';
static override readonly Version = 'v1';
static override readonly Subject = 'player';
static override readonly Description =
'Tracks player momentum metrics including scoring streaks';
}
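Putting the two pieces together, registration boils down to reading that static metadata off the handler's class. This is a simplified, self-contained sketch of the pattern, not the production types:

```typescript
// Simplified stand-ins for the real base class and registry entry
abstract class BaseProjectionHandler {
  static readonly Name: string = '';
  static readonly Version: string = 'v1';
  static readonly Subject: 'player' | 'match' = 'match';
  static readonly Description: string = '';
  abstract execute(params: unknown): Promise<unknown>;
}

class PlayerMomentumProjection extends BaseProjectionHandler {
  static override readonly Name = 'PlayerMomentum';
  static override readonly Subject = 'player' as const;
  static override readonly Description =
    'Tracks player momentum metrics including scoring streaks';
  async execute(): Promise<unknown> {
    return []; // the real handler queries ClickHouse here
  }
}

// Registration reads the metadata off the handler's constructor
const registry = new Map<string, { name: string; version: string; subject: string }>();
const handler = new PlayerMomentumProjection();
const ctor = handler.constructor as typeof BaseProjectionHandler;
registry.set('playermomentum', {
  name: ctor.Name,
  version: ctor.Version,
  subject: ctor.Subject,
});
```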
Phase 1: Static Imports and Consolidation
Dynamic imports had served us well during rapid development, but they came with costs: slower startup times, no tree-shaking, and harder-to-trace execution paths. We replaced all dynamic import() calls with static top-level imports across 13 projection handlers.
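The difference is easiest to see with a standard-library module standing in for a projection handler module (node:path here, purely for illustration):

```typescript
// Static: resolved at build time; bundlers can tree-shake it and
// type checkers and tracers see the dependency explicitly.
import { join } from 'node:path';

// Dynamic: resolved at call time; opaque to tree-shaking, and the
// first call pays the module-load cost at runtime.
async function dynamicJoin(a: string, b: string): Promise<string> {
  const path = await import('node:path');
  return path.join(a, b);
}

function staticJoin(a: string, b: string): string {
  return join(a, b);
}
```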
Consolidating the Base Handler
We had accumulated technical debt in the form of multiple base classes:
- base.projection.mts - Basic projection interface
- base.statistics.projection.mts - CTE methods for statistics queries
- Duplicate momentum.query.mts files
Caroline and I consolidated these into a single unified BaseProjectionHandler with all Common Table Expression (CTE) methods:
export abstract class BaseProjectionHandler<TParams, TResult> {
  // Standard metadata properties
  static readonly Name: string;
  static readonly Version: string = 'v1';
  static readonly Description: string = '';
  static readonly Subject: 'player' | 'match';

  // Shared CTE methods for building complex queries (SQL bodies elided)
  protected baseShotsWithScoreCTE(params: TParams): string { return /* SQL */ ''; }
  protected outcomeSumsCTE(): string { return /* SQL */ ''; }
  protected teamAggregatesCTE(): string { return /* SQL */ ''; }

  // Each concrete projection implements these
  abstract execute(params: TParams): Promise<TResult>;
  protected abstract buildQuery(params: TParams): string;
  protected abstract map(rows: unknown[]): TResult;
}
All projections now accept a standardized interface with matchId, optional playerId, and an after parameter for incremental updates. This consistency makes the API predictable and easy to use.
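In code, that shared contract looks roughly like this. The interface is simplified, and the column names in the helper are illustrative (using ClickHouse's named-parameter syntax), not our production schema:

```typescript
// Assumed shape of the standardized parameters (field names from the text)
interface BaseProjectionParams {
  matchId: string;
  playerId?: string; // only meaningful for subject: 'player' projections
  after?: string;    // ISO timestamp: only fold in events after this point
}

// Hypothetical helper: every handler can build filters the same way
function buildWhereClause(params: BaseProjectionParams): string {
  const clauses = ['match_id = {matchId:String}'];
  if (params.playerId) clauses.push('player_id = {playerId:String}');
  if (params.after) clauses.push('event_time > {after:DateTime64(3)}');
  return clauses.join(' AND ');
}
```

Because the after parameter is part of the shared contract, incremental updates come for free in every projection rather than being bolted on per handler.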
Building the Projections Layer
With the foundation solid, we focused on the projection infrastructure itself. The Projections class became the central orchestrator for all analytical queries:
sequenceDiagram
participant Client
participant Projections
participant Registry
participant Handler
participant ClickHouse
participant Tracer
Client->>Projections: execute(type, params)
Projections->>Registry: get(type)
Registry-->>Projections: handler entry
Projections->>Tracer: startSpan(projection.execute)
Projections->>Handler: execute(params)
Handler->>Handler: buildQuery(params)
Handler->>ClickHouse: query
ClickHouse-->>Handler: rows
Handler->>Handler: map(rows)
Handler-->>Projections: result
Projections->>Tracer: setAttributes
Projections-->>Client: ProjectionResult
OpenTelemetry Integration
Every projection execution is automatically wrapped in an OpenTelemetry span:
return await this.#tracer.startActiveSpan(
  `projection.execute.${entry.name}`,
  { attributes: { 'projection.name': entry.name } },
  async (span) => {
    try {
      const startTime = performance.now();
      const data = await entry.handler.execute(params);
      const executionTime = performance.now() - startTime;
      span.setAttributes({
        'projection.execution_time_ms': executionTime,
        'projection.subject': entry.subject,
      });
      return {
        name: entry.name,
        version: entry.version,
        subject: entry.subject,
        data,
        executionTime,
      };
    } finally {
      span.end();
    }
  }
);
This gives us distributed tracing across service boundaries without any manual instrumentation in projection handlers.
Achievements and ROI
This architectural improvement delivered measurable benefits:
Code Quality
- Removed 300+ lines of duplicate base handler code
- Eliminated all dynamic imports (13 projections)
- Consolidated 3 base classes into 1 unified handler
- Standardized projection interfaces across all handlers
Maintainability
- Clear separation: Domain ↔ Application ↔ Projections
- Self-describing projections with metadata (name, version, description, subject)
- Type-safe projection registry with case-insensitive lookup
- Single source of truth for each projection’s capabilities
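The case-insensitive lookup mentioned above can be as simple as normalizing keys at the registry boundary. A sketch of the idea, not the production code:

```typescript
// Normalize keys on both write and read so 'PlayerMomentum',
// 'playermomentum', and 'PLAYERMOMENTUM' all resolve the same entry.
class CaseInsensitiveRegistry<V> {
  #map = new Map<string, V>();
  set(key: string, value: V): this {
    this.#map.set(key.toLowerCase(), value);
    return this;
  }
  get(key: string): V | undefined {
    return this.#map.get(key.toLowerCase());
  }
}
```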
Performance
- Faster startup time (static imports enable tree-shaking)
- Built-in incremental updates via the after parameter
- OpenTelemetry spans for every projection (automatic performance monitoring)
- Independent scaling of OLAP (analytical) vs OLTP (transactional) workloads
- ClickHouse handles complex aggregations 10-100x faster than PostgreSQL
Developer Experience
- Consistent API: all projections accept standardized parameters
- Explicit error handling for partial failures
- Version tracking enables schema evolution without breaking clients
- Clear data flow from routes to UI components
What’s Next
With Phase 0 and Phase 1 complete, the foundation is solid for Phase 2: Dynamic API Generation. The metadata we’ve added to projections (name, version, description, subject) will enable:
- Automatic route generation based on projection metadata
- Discovery endpoint listing all available projections
- OAuth scope derivation from projection subjects
- HTTP caching strategy based on projection characteristics
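A rough sketch of what deriving routes and scopes from that metadata could look like. The route shape, helper name, and scope convention are assumptions about Phase 2, not the shipped design:

```typescript
interface ProjectionMeta {
  name: string;
  version: string;
  subject: 'player' | 'match';
  description: string;
}

// Derive a route path and OAuth scope purely from projection metadata
function routeFor(meta: ProjectionMeta): { path: string; scope: string } {
  // PascalCase name -> kebab-case URL segment
  const slug = meta.name
    .replace(/([a-z0-9])([A-Z])/g, '$1-$2')
    .toLowerCase();
  return {
    path: `/${meta.version}/${meta.subject}/${slug}`,
    scope: `read:${meta.subject}`,
  };
}
```

Iterating the registry with a function like this is all a discovery endpoint would need: the same metadata drives routing, documentation, and authorization.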
The projection registry pattern has proven its worth. Each projection knows its own purpose, version, and scope - making the system self-documenting and easier to extend.
Working with Caroline on this refactoring was a great example of human-AI collaboration. I provided the architectural vision and domain knowledge, while Caroline helped implement consistent patterns across 13 projection handlers, caught edge cases I missed, and suggested improvements like consolidating the CTE methods into the base handler.
The result is a cleaner, more maintainable codebase that better reflects the principles of CQRS and event sourcing. Sometimes the best way forward is to step back, rethink your boundaries, and rebuild them properly.
Stef Hock, with Caroline and Claude
Key Takeaways:
- Separate transactional and analytical concerns into distinct layers
- Make components self-describing through metadata properties
- Use static imports for better tree-shaking and startup performance
- Built-in observability (tracing, versioning) pays dividends
- Human-AI collaboration accelerates large-scale refactoring
- Consolidate shared infrastructure to reduce duplication