Disclaimer: This blog post is automatically generated from project documentation and technical proposals using AI assistance. The content represents our development journey and architectural decisions. Code examples are simplified illustrations and may not reflect the exact production implementation.
Table of Contents
- The Challenge: When Metrics Evolve
- The Why: Statistical Integrity Over Time
- The What: Snapshot-Based Versioning
- The How: Implementation Journey
- Collaboration in Action
- The Results: What We Achieved
- Looking Forward
The Challenge: When Metrics Evolve
Imagine this scenario: You’ve carefully tuned your player statistics algorithms, assigning specific weights to different shot types—smashes get higher offensive weights, delicate chiquitas favor precision metrics. Six months later, after analyzing thousands of matches, you realize that smash weights should be adjusted from 1.0 to 0.95 to better reflect actual game dynamics.
You make the change. The new matches look great. But then you notice something troubling: when you recalculate statistics for historical matches, the numbers have changed. Players who were leading certain categories are now in different positions. The problem? You’ve just invalidated your entire statistical history.
This is the challenge we faced with the Scores project—and why we implemented shot weight snapshot tracking.
The Why: Statistical Integrity Over Time
Our core values of Care and Continuous Improvement collided head-on with this challenge. We care deeply about the accuracy of our statistics, but we also need the freedom to continuously improve our algorithms without breaking historical data.
The fundamental problem was simple: shot weights are critical domain knowledge, but we were treating them as mutable configuration. Every time we updated weights, we lost the ability to reproduce historical calculations.
The Real-World Impact
Consider a player profile showing:
- Offensive Power: 85 (calculated in June with weight 1.0)
- Consistency: 72 (calculated in June with weight 1.0)
After updating the weight to 0.95:
- Offensive Power: 81 (recalculated with new weight 0.95)
- Consistency: 72 (unchanged)
The player’s profile has changed retroactively, through no fault of their own. For competitive analysis, historical comparisons, or long-term tracking, this is unacceptable.
The What: Snapshot-Based Versioning
Our solution embraces the immutability principle from event sourcing: instead of updating weights, we create new snapshots. Each snapshot represents a complete set of shot weights at a specific point in time.
```mermaid
graph TB
A[Match Created] --> B{Query Latest<br/>Published Snapshot}
B --> C[Assign Snapshot ID<br/>to Match]
C --> D[Match Events Include<br/>Snapshot Reference]
D --> E[Events Flow to<br/>ClickHouse]
E --> F[Projections Use<br/>Snapshot-Specific Weights]
G[Admin Creates<br/>Draft Snapshot] --> H[Test & Validate<br/>New Weights]
H --> I[Publish Snapshot]
I --> J[New Matches Use<br/>New Snapshot]
style C fill:#a8dadc
style F fill:#a8dadc
style I fill:#457b9d
```
Key Design Principles
- Immutability: Published snapshots cannot be modified or deleted
- Explicit References: Every match knows which snapshot it uses
- Draft Workflow: Test weight changes before publishing
- Event-Driven Sync: Changes propagate through domain events
- Query-Time Selection: Projections fetch the correct weights automatically
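To make these principles concrete, here is a minimal sketch of a snapshot type and its one-way publish transition. The names and the in-memory weights field are illustrative, not the production model:

```typescript
// Illustrative snapshot shape mirroring the schema shown below:
// a snapshot is mutable only while it is still a draft.
interface ShotWeightSnapshot {
  id: string;
  name: string;
  description: string | null;
  draft: boolean;
  weights: Record<string, number>; // e.g. { smash: 0.95, chiquita: 1.0 }
  createdAt: string;
  modifiedAt: string;
}

// Publishing is a one-way transition; published snapshots are frozen.
function publishSnapshot(snapshot: ShotWeightSnapshot): ShotWeightSnapshot {
  if (!snapshot.draft) {
    throw new Error('Snapshot is already published and cannot change.');
  }
  return { ...snapshot, draft: false, modifiedAt: new Date().toISOString() };
}

// Weight edits are only legal on drafts; anything else needs a new draft.
function updateWeights(
  snapshot: ShotWeightSnapshot,
  weights: Record<string, number>
): ShotWeightSnapshot {
  if (!snapshot.draft) {
    throw new Error('Published snapshots are immutable; create a new draft.');
  }
  return { ...snapshot, weights, modifiedAt: new Date().toISOString() };
}
```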
The How: Implementation Journey
Database Migrations
We started with the foundation—database schema changes across both PostgreSQL (write side) and ClickHouse (read side).
PostgreSQL Schema:
```sql
-- Create snapshots table with draft/published support
CREATE TABLE shot_weight_snapshots (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name VARCHAR(255) NOT NULL,
description TEXT,
draft BOOLEAN NOT NULL DEFAULT TRUE,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
modified_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
-- Add snapshot reference to matches
ALTER TABLE matches
ADD COLUMN shot_weight_snapshot_id UUID
NOT NULL DEFAULT '00000000-0000-0000-0000-000000000000'
REFERENCES shot_weight_snapshots(id);
```
ClickHouse Denormalization:
```sql
-- Add snapshot_id to shots table
ALTER TABLE shots
ADD COLUMN shot_weight_snapshot_id String
DEFAULT '00000000-0000-0000-0000-000000000000';
-- Add to match_timing table
ALTER TABLE match_timing
ADD COLUMN shot_weight_snapshot_id String
DEFAULT '00000000-0000-0000-0000-000000000000';
```
The default UUID (00000000-0000-0000-0000-000000000000) represents our initial baseline weights, ensuring all historical matches have a valid reference.
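One detail worth calling out: the NOT NULL DEFAULT on matches.shot_weight_snapshot_id only works if a snapshot row with the zero UUID already exists. A minimal seeding sketch, assuming a generic db handle rather than our actual migration tooling:

```typescript
// Hypothetical seed step: insert the baseline snapshot that the zero
// UUID points at, before the matches migration adds its foreign key.
const BASELINE_SNAPSHOT_ID = '00000000-0000-0000-0000-000000000000';

export async function seedBaselineSnapshot(db: {
  query: (sql: string) => Promise<unknown>;
}): Promise<void> {
  await db.query(`
    INSERT INTO shot_weight_snapshots (id, name, description, draft)
    VALUES (
      '${BASELINE_SNAPSHOT_ID}',
      'Baseline weights',
      'Initial shot weights in effect before snapshot tracking',
      FALSE -- published, since historical matches already reference it
    )
    ON CONFLICT (id) DO NOTHING
  `);
}
```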
Domain Model Evolution
We extended the match aggregate to track snapshot IDs throughout the entire lifecycle:
```typescript
// Match state now includes snapshot tracking
export interface MatchState {
id: string;
createdAt: string;
updatedAt: string;
shotWeightSnapshotId: string; // New field
totalShotsPlayed: number;
startedAt: string | null;
finishedAt: string | null;
}
// Match creation queries for latest published snapshot
export class MatchAggregate {
  static Create(
    id: string,
    shotWeightSnapshotId = DEFAULT_SHOT_WEIGHT_SNAPSHOT_ID
  ): MatchAggregate {
    const match = new MatchAggregate();
    // Record a MatchCreated event carrying the snapshot reference, so the
    // match is permanently tied to the weights in effect at creation time
    // (`apply` stands in for the event-sourcing base-class helper)
    match.apply({
      type: 'MatchCreated',
      matchId: id,
      shotWeightSnapshotId,
      occurredAt: new Date().toISOString(),
    });
    return match;
  }
}
```
Domain Events Include Snapshot:
```typescript
export interface MatchCreatedEvent {
type: 'MatchCreated';
matchId: string;
shotWeightSnapshotId: string;
occurredAt: string;
}
export interface ShotPlayedEvent {
type: 'ShotPlayed';
matchId: string;
shotWeightSnapshotId: string;
shotType: string;
outcome: string;
// ... other fields
}
```
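How does the snapshot ID reach every ShotPlayed event? The aggregate stamps it from the state captured at creation, so individual shots never re-query the snapshot. A rough sketch (the playShot method and apply helper are illustrative):

```typescript
// Sketch: each shot event copies the snapshot reference the match was
// created with, so downstream consumers never have to look it up.
playShot(shotType: string, outcome: string): void {
  this.apply({
    type: 'ShotPlayed',
    matchId: this.state.id,
    shotWeightSnapshotId: this.state.shotWeightSnapshotId,
    shotType,
    outcome,
    // ... other fields, as in the interface above
  });
}
```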
Match Lifecycle Integration
The most critical piece was ensuring new matches automatically use the latest published snapshot:
```typescript
// CreateMatch command handler queries for latest
async execute(command: CreateMatchCommand): Promise<void> {
const query: GetLatestPublishedShotWeightSnapshotQuery = {
type: QueryTypes.GetLatestPublishedShotWeightSnapshot,
params: {},
};
const snapshot = await this.app.executeQuery(query);
if (!snapshot) {
throw new Error(
'No shot weight snapshot available. ' +
'Cannot create match without a valid snapshot.'
);
}
const match = MatchAggregate.Create(
command.details.id,
snapshot.id
);
await this.repository.save(match);
}
```
This fail-fast approach ensures we never create a match without knowing which weights to use.
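On the read side, "latest published snapshot" reduces to a simple ordered query over the table created earlier. A sketch of what the query handler might look like (the handler shape and db helper are assumptions):

```typescript
// Hypothetical handler for GetLatestPublishedShotWeightSnapshot:
// the most recently created non-draft snapshot wins.
async handle(): Promise<{ id: string } | null> {
  const row = await this.db.queryOne<{ id: string }>(`
    SELECT id
    FROM shot_weight_snapshots
    WHERE draft = FALSE
    ORDER BY created_at DESC
    LIMIT 1
  `);
  return row ?? null;
}
```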
Projection Layer Updates
Projections needed to return the snapshot ID alongside their data, giving consumers full transparency about which weights were used:
```typescript
export interface ProjectionResult<T> {
name: string;
version: string;
data: T;
executionTimeMs: string;
shotWeightSnapshotId: string; // New field
}
// Base projection handler fetches snapshot ID
protected async getSnapshotId(
params: BaseProjectionParams
): Promise<string> {
const query = `
SELECT COALESCE(
(SELECT shot_weight_snapshot_id
FROM match_timing
WHERE match_id = {matchId: String} LIMIT 1),
(SELECT shot_weight_snapshot_id
FROM shots
WHERE match_id = {matchId: String} LIMIT 1),
'00000000-0000-0000-0000-000000000000'
) as snapshot_id`;
  // {matchId: String} is ClickHouse parameter-binding syntax; the bound
  // value must be supplied alongside the SQL (client helper signature
  // assumed here for illustration)
  const result = await this.clickhouse.query(query, {
    matchId: params.matchId,
  });
  return result.snapshot_id;
}
```
API Response Example:
```json
{
"PlayerProfiles": {
"data": {
"offensive_power": 85,
"consistency": 72
},
"executionTimeMs": "142.50",
"version": "v1",
"shotWeightSnapshotId": "a1b2c3d4-..."
}
}
```
API Enhancements
We also updated HTTP status codes to better reflect the async nature of command processing:
- POST /shot_weights/snapshots → Returns 201 CREATED
- PUT /shot_weights/snapshots/:id → Returns 202 ACCEPTED
- PATCH /shot_weights/snapshots/:id → Returns 202 ACCEPTED
- POST /shot_weights/snapshots/:id/publish → Returns 202 ACCEPTED
The 202 status code indicates that the command has been accepted for processing rather than already applied, which fits the event-driven architecture: the response acknowledges receipt, and the read model catches up asynchronously.
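For illustration, an endpoint that dispatches a command and acknowledges receipt might look like the sketch below; the Express-style router and the commandBus are assumptions, not our exact HTTP layer:

```typescript
// Sketch: publish is a command, not a synchronous update, so the route
// answers 202 as soon as the command is handed to the bus.
router.post('/shot_weights/snapshots/:id/publish', async (req, res) => {
  await commandBus.dispatch({
    type: 'PublishShotWeightSnapshot',
    params: { id: req.params.id },
  });
  res.status(202).json({ status: 'accepted' });
});
```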
Collaboration in Action
This implementation showcased the power of Collaboration between Caroline (AI) and Stef (human):
Caroline’s Strengths
- Pattern Recognition: Identified similar snapshot patterns in other domains
- Code Generation: Rapidly generated boilerplate for migrations and domain events
- Consistency Checking: Caught places where snapshot ID needed to propagate
- Documentation: Synthesized implementation into coherent documentation
Stef’s Strengths
- Domain Expertise: Understood the business implications of weight changes
- Architectural Decisions: Chose immutability over versioned updates
- Edge Case Identification: “What happens if no snapshot exists?”
- Testing Strategy: Designed test cases covering snapshot lifecycle
The Creative Process
The Creativity came from challenging assumptions:
- “Why store weights in the domain if ClickHouse has them?”
- “What if we make snapshots immutable after publishing?”
- “Should projections automatically fetch their snapshot ID?”
Each question led to a design decision that made the system more robust.
The Results: What We Achieved
✅ Statistical Consistency
Matches are forever tied to the weights used when they were played. Historical statistics remain unchanged even as we improve our algorithms.
✅ Transparency
API responses include shotWeightSnapshotId, giving consumers full visibility into which weight configuration was used.
✅ Safe Experimentation
Draft snapshots allow testing new weights without affecting production matches.
✅ Comprehensive Testing
58 tests passing, including 5 new tests specifically for snapshot lifecycle scenarios.
✅ Type Safety Improvements
Refactored the repository to use Pick&lt;MatchState, ...&gt; utility types instead of inline object literals, improving maintainability (see the sketch at the end of this section).
✅ SQL Optimization
Moved default fallback logic from JavaScript to SQL using COALESCE, reducing code complexity.
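As an illustration of the Pick&lt;MatchState, ...&gt; refactor mentioned above, deriving narrow parameter types from the state interface keeps repository signatures in sync whenever MatchState changes (the names here are ours, not the production code):

```typescript
// Reuses the MatchState interface shown earlier; the repository method
// only accepts the fields it actually persists for this operation.
type MatchSnapshotRef = Pick<MatchState, 'id' | 'shotWeightSnapshotId'>;

async function saveSnapshotRef(ref: MatchSnapshotRef): Promise<void> {
  // e.g. UPDATE matches SET shot_weight_snapshot_id = $2 WHERE id = $1
}
```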
Looking Forward
This implementation sets the foundation for several future enhancements:
- Snapshot Comparison API: Compare how different weight configurations affect the same match data
- A/B Testing: Run parallel calculations with different snapshots to validate weight changes
- Machine Learning Integration: Use historical snapshot data to train models for optimal weight selection
- Audit Trail: Track who created snapshots, when they were published, and why weights changed
The journey from identifying the problem to implementing the solution demonstrates our commitment to Continuous Improvement. We didn’t just patch the issue—we redesigned the system to handle weight evolution gracefully, maintaining statistical integrity while enabling ongoing refinement of our metrics.
What architectural challenges have you faced when dealing with evolving business rules? How do you balance immutability with the need for change? Let’s discuss in the comments below!