Tech USP: Every change to an entity is an event on an aggregate stream. The audit trail isn’t something you build — it is the framework.
Buyer’s view: “Who changed the record on March 12 at 14:32?” — one query. ISO 27001 audit, GDPR access requests, internal review on demand instead of in a sprint.
What this means in practice
In Kumiko, a write handler doesn’t write directly to a table — it emits an event:
```ts
r.writeHandler({
  name: "incident.open",
  schema: openIncidentSchema,
  handler: async (event, ctx) => {
    await ctx.appendEvent("incident-opened", {
      incidentId: event.incidentId,
      title: event.title,
      severity: event.severity,
    });
  },
});
```
The event lands in the events table (Postgres, no Kafka). The read-model table (incidents) is built from it via projection — synchronously, in the same transaction. Read-after-write “just works”.
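Conceptually, a single-stream projection is just a fold over the entity’s event history into one read-model row. The sketch below illustrates that idea in plain TypeScript — the event names and field shapes are assumptions for illustration, not Kumiko’s actual API:

```ts
// Illustrative sketch: fold an incident's event stream into its read-model row.
// Event shapes are assumed, not taken from Kumiko.
type IncidentEvent =
  | { type: "incident-opened"; incidentId: string; title: string; severity: string }
  | { type: "incident-closed"; incidentId: string };

interface IncidentRow {
  id: string;
  title: string;
  severity: string;
  status: "open" | "closed";
}

function projectIncident(events: IncidentEvent[]): IncidentRow | null {
  let row: IncidentRow | null = null;
  for (const e of events) {
    switch (e.type) {
      case "incident-opened":
        row = { id: e.incidentId, title: e.title, severity: e.severity, status: "open" };
        break;
      case "incident-closed":
        if (row) row.status = "closed"; // ignore close without a prior open
        break;
    }
  }
  return row;
}
```

Because the fold runs in the same transaction that appends the event, the `incidents` row is already up to date when the write handler returns.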
What you get for free
“Who changed what when” as a framework primitive

```sh
yarn kumiko inspect incident <id>
```
Returns the full event history for the entity. No separate audit table, no Postgres triggers, no drift risk.
“What did it look like at 14:32 on March 12?”

```ts
const oldState = await ctx.loadAggregate("incident", id, {
  asOf: Temporal.Instant.from("2026-03-12T14:32:00Z"),
});
```
Time travel as a framework primitive. Compliance auditors love it.
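Under the hood, `asOf` needs nothing exotic: replay only the events recorded at or before the requested instant. A minimal, self-contained sketch of that mechanic (this is an assumption about the approach, not Kumiko internals):

```ts
// Sketch: time travel = replay the stream, but stop at the asOf instant.
interface StoredEvent<T> {
  recordedAt: string; // normalized ISO-8601 UTC timestamp, e.g. "2026-03-12T14:30:00Z"
  payload: T;
}

function replayAsOf<T, S>(
  events: StoredEvent<T>[],
  asOf: string, // ISO-8601 UTC instant
  apply: (state: S, payload: T) => S,
  initial: S,
): S {
  return events
    // Lexicographic comparison is correct for same-format UTC ISO-8601 strings.
    .filter((e) => e.recordedAt <= asOf)
    .reduce((state, e) => apply(state, e.payload), initial);
}
```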
Read models are rebuildable
Schema migration of a read-model table = drop the projection, rebuild it from the event log, upcasting old events along the way. Schema bug that ate data? Drop and rebuild. The original data was never gone.
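An upcaster is just a function that lifts old event versions to the current schema before the projection sees them. A sketch under assumed field names (the `severity` field and its default are illustrative, not from Kumiko):

```ts
// Sketch: upcast a v1 "incident-opened" event (no severity field yet)
// to the current v2 schema during a rebuild.
interface OpenedV1 { version: 1; incidentId: string; title: string }
interface OpenedV2 { version: 2; incidentId: string; title: string; severity: string }

function upcastOpened(e: OpenedV1 | OpenedV2): OpenedV2 {
  if (e.version === 2) return e;
  // v1 events predate the severity field; backfill a documented default.
  return { version: 2, incidentId: e.incidentId, title: e.title, severity: "unknown" };
}
```

The stored events are never mutated — upcasting happens on read, so the log stays an immutable record.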
GDPR erasure without audit gaps
Crypto-shredding: PII fields encrypted with user-specific keys. “Right to be forgotten” = delete the key. The aggregate stays readable for compliance audits (event structure), PII is mathematically illegible.
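The mechanic is straightforward with standard primitives. A self-contained sketch using Node’s built-in `crypto` module (the in-memory key map stands in for a real key-management store; this shows the technique, not Kumiko’s implementation):

```ts
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Crypto-shredding sketch: PII is encrypted with a per-user key kept outside
// the event log. Deleting the key makes the ciphertext permanently unreadable
// while the event structure stays intact for audits.
const userKeys = new Map<string, Buffer>(); // stand-in for a key-management store

function encryptPii(userId: string, plaintext: string) {
  let key = userKeys.get(userId);
  if (!key) {
    key = randomBytes(32); // AES-256 key, created lazily per user
    userKeys.set(userId, key);
  }
  const iv = randomBytes(12); // 96-bit nonce, standard for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const data = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return {
    iv: iv.toString("base64"),
    data: data.toString("base64"),
    tag: cipher.getAuthTag().toString("base64"),
  };
}

function decryptPii(userId: string, c: { iv: string; data: string; tag: string }): string | null {
  const key = userKeys.get(userId);
  if (!key) return null; // key shredded: PII is gone, the event itself remains
  const decipher = createDecipheriv("aes-256-gcm", key, Buffer.from(c.iv, "base64"));
  decipher.setAuthTag(Buffer.from(c.tag, "base64"));
  return Buffer.concat([
    decipher.update(Buffer.from(c.data, "base64")),
    decipher.final(),
  ]).toString("utf8");
}

function shredUser(userId: string): void {
  userKeys.delete(userId); // "right to be forgotten": one key deletion
}
```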
What it isn’t
- No separate event-bus infrastructure — Postgres is the event store. No Kafka, no NATS, no EventStoreDB.
- No DDD vocabulary requirement — you can learn Kumiko without a single “aggregate” or “bounded context”. Events live under the hood; the author-facing layer is `r.writeHandler({...})`.
- No performance killer — single-stream projections run inline (read-after-write OK). Cross-aggregate projections are explicitly async.
- No uber-scale tool — Postgres + Kumiko scales for 99% of business apps. >100k events/sec belongs in Kafka.
Architecture deep dive
| Doc | Content |
|---|---|
| event-sourcing-pivot | Pivot rationale, events table, asOf, aggregate streams |
| event-dispatcher | Async delivery, MultiStreamProjection, dead letter, retention |
| projections | Projection patterns (single-stream, cross-aggregate) |
| es-gdpr-strategy | GDPR + crypto-shredding |
| es-competitor-scan | Comparison with Marten/Axon/EventStoreDB |
Where this lands in the pitch
- EU mid-market: Top argument — ISO 27001, GDPR, internal review. Make it concrete with a `kumiko inspect` demo.
- Indie hackers: Sub-argument — enterprise-deal closer (“customer asked for a change-history dashboard”).