// healthcare
Healthcare platform modernization
Anonymised under NDA
65 calendar days · 19 active coding days
AI-driven commits
119
LOC delivered
91,500
vs. legacy baseline
42% (220K → 91K)
API parity
218 / 218
Mutation parity
109 / 109 · 0 DB diff
// the case
Replacing a legacy core in 19 active coding days.
A regulated healthcare platform — a 220,000-line legacy core, frozen as the parity baseline. The mandate: rebuild it as a modern microservices stack with measurable behavioural parity, audit-ready from day one.
Single primary engineer at the keyboard. Single long-lived branch. Two and a half months of calendar time. The work landed in 19 active coding days — the rest was planning, client alignment, and parity-test design.
// what we delivered
What we delivered.
Eleven domain modules (~370 operations) extracted from the monolith into a modular service architecture. A shared kernel, shared infrastructure, an integration gateway with six external-system adapters, message-bus integration backed by a SQL outbox, real-time push over a cache backplane, OpenTelemetry, health checks, OpenAPI documentation, and per-module architecture tests.
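The transactional-outbox idea behind that message-bus integration can be sketched minimally. This is an illustrative Python/SQLite sketch, not the delivered C#/.NET code; the `patient` and `outbox` tables, event topic, and function names are all hypothetical:

```python
import json
import sqlite3

# Illustrative schema: a business table plus an outbox table written in
# the SAME transaction, so a domain event is never lost (or published
# without) the state change it describes.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE patient (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE outbox  (id INTEGER PRIMARY KEY AUTOINCREMENT,
                          topic TEXT, payload TEXT, published INTEGER DEFAULT 0);
""")

def register_patient(name: str) -> None:
    # State change and outbox row commit atomically.
    with conn:
        cur = conn.execute("INSERT INTO patient (name) VALUES (?)", (name,))
        event = {"patient_id": cur.lastrowid, "name": name}
        conn.execute("INSERT INTO outbox (topic, payload) VALUES (?, ?)",
                     ("patient.registered", json.dumps(event)))

def relay_once(publish) -> int:
    # A separate relay drains unpublished rows to the message bus,
    # then marks them published. Returns how many rows were sent.
    rows = conn.execute(
        "SELECT id, topic, payload FROM outbox WHERE published = 0").fetchall()
    for row_id, topic, payload in rows:
        publish(topic, json.loads(payload))
        conn.execute("UPDATE outbox SET published = 1 WHERE id = ?", (row_id,))
    conn.commit()
    return len(rows)
```

The point of the pattern, and the reason the mutation-parity tests below can diff outbox tables at all, is that the event record is ordinary database state, committed alongside the business write.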
Final codebase: 91,500 lines across 1,000+ files. The same business behaviour as the legacy baseline at 42% of the line count — the result of modern patterns, query deduplication, and the removal of dead code amounting to more than half of the legacy total.
// how we did it
AI-first, in practice.
Human as operator only. Coding by AI, end-to-end, against a layered context — auto-loaded rules, session-bootstrap prompts, methodology recipes, and step files. Every change a pull request; every PR an artefact in the audit trail.
Construction wasn't a gentle ramp. The core sprint landed nearly a hundred commits in seven days — the eleven modules, the gateway, all cross-cutting infrastructure. After that, weeks of parity-test debugging at two to three commits a day. Two distinct work modes, both visible in the git history.
// proof of correctness
100% behavioural parity, including database state.
Read-side parity: 218 corpus tests, 218 passing, zero failures, achieved iteratively as the test corpus grew from 69 to 227 entries over twelve days. Each expansion of the corpus revealed new mismatches; each round closed them.
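The shape of a corpus-driven parity check like this is simple to sketch. The normalisation rules, field names, and corpus format below are assumptions for illustration, not the project's actual Python harness:

```python
from typing import Any

# Fields that legitimately differ between platforms (e.g. timestamps,
# trace ids) are masked before comparison. Which fields those are is an
# assumption here; a real harness would configure this per endpoint.
VOLATILE_FIELDS = {"timestamp", "trace_id"}

def normalise(value: Any) -> Any:
    # Drop volatile fields and canonicalise key order, recursively.
    if isinstance(value, dict):
        return {k: normalise(v) for k, v in sorted(value.items())
                if k not in VOLATILE_FIELDS}
    if isinstance(value, list):
        return [normalise(v) for v in value]
    return value

def parity_failures(corpus, call_legacy, call_new):
    # corpus: iterable of request descriptors. The two callables return
    # (status, body) pairs from the old and new platforms.
    failures = []
    for request in corpus:
        old, new = call_legacy(request), call_new(request)
        if (old[0], normalise(old[1])) != (new[0], normalise(new[1])):
            failures.append(request)
    return failures
```

Each corpus expansion described above amounts to growing the `corpus` iterable and driving `parity_failures` to an empty list again.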
Mutation parity: 109 mutation tests, 109 passing — zero HTTP response differences and zero database-state differences between old and new platforms, including outbox tables and domain-event records. The only skipped tests cover confirmed bugs in the legacy system that the new platform deliberately doesn't reproduce.
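Diffing database state after a mutation reduces to snapshotting the relevant tables on both platforms and comparing. A minimal sketch, assuming SQLite-style access and an illustrative list of columns (such as audit timestamps) to ignore:

```python
import sqlite3

def snapshot(conn, tables, ignore=("updated_at",)):
    # Dump each table as a sorted list of row tuples, dropping columns
    # that legitimately differ between runs (the `ignore` set is an
    # assumption; a real harness would configure it per table).
    state = {}
    for table in tables:
        cur = conn.execute(f"SELECT * FROM {table}")
        cols = [c[0] for c in cur.description]
        keep = [i for i, c in enumerate(cols) if c not in ignore]
        state[table] = sorted(tuple(row[i] for i in keep) for row in cur)
    return state

def db_diff(legacy_conn, new_conn, tables):
    # Names of tables whose post-mutation contents differ.
    old, new = snapshot(legacy_conn, tables), snapshot(new_conn, tables)
    return [t for t in tables if old[t] != new[t]]
```

Including the outbox and domain-event tables in `tables` is what extends the parity claim from "same HTTP responses" to "same persisted effects".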
// where this fits
Where this fits.
This is what AI-assisted modernization looks like when the methodology is in place: measurable behavioural parity at every step, audit-ready PRs, a single engineer covering ground that traditional shops staff with a team of five.
The reason it works isn't the AI tooling — it's the context discipline around it. The methodology travels; the same approach has been applied to core-banking modernization across three continents.
// stack
Tools.
- C# / .NET
- Message bus + SQL outbox
- Redis (cache + real-time backplane)
- OpenTelemetry
- OpenAPI / Scalar
- Python (parity test harness)
- Per-module architecture tests
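The per-module architecture tests in the stack enforce module boundaries mechanically. The platform itself is C#/.NET, where libraries such as NetArchTest fill this role; the idea can be sketched in Python with hypothetical module names:

```python
import ast

# Hypothetical module layout: siblings may not import each other,
# though anything may import the shared kernel or shared infrastructure.
MODULES = {"admissions", "billing", "scheduling"}

def boundary_violations(module_name: str, source: str) -> list:
    # Return imports in `source` that reach from `module_name` into a
    # sibling module, by walking the parsed import statements.
    bad = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module]
        else:
            continue
        for name in names:
            root = name.split(".")[0]
            if root in MODULES and root != module_name:
                bad.append(name)
    return bad
```

Run as an ordinary test per module, a check like this turns the architecture diagram into something the build can fail on.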