What You Get from a Legacy System Modernization Assessment
What do we receive in 30 days?
A 30-day Legacy System Modernization Assessment delivers four concrete artifacts: executable specifications for your current system, a behavioral map separating used code from dead code, a prioritized modernization roadmap, and a target architecture based on AI-native services. You leave the assessment with a decision-ready plan — not a sales deck — and every artifact is yours to keep. No commitment to a multi-year rewrite is required to get the output.
This is designed as a standalone deliverable. If you proceed with incremental replacement afterward, the assessment becomes its first input. If you don’t, the artifacts still have value on their own.
What exactly is delivered?
Four artifacts, each one portable beyond the engagement:
- Executable specifications — human-readable descriptions of what your system actually does today, tied back to the code that implements each behavior.
- Behavioral map — production log analysis separating happy paths (what runs) from ghost paths (what doesn’t).
- Modernization roadmap — a prioritized sequence of features to replace, with estimated effort, dependencies, and risk per feature.
- AI-native replacement architecture — a target-state design showing the API façade, service boundaries, and data flows for the rebuilt system.
Each artifact is delivered in a format your team can keep using after the assessment ends — not a PDF that goes stale.
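To make the behavioral map concrete: at its core, ghost-path detection is a diff between the routes declared in the codebase and the routes that actually appear in production logs. The sketch below is purely illustrative; the route names, log format, and variable names are hypothetical, not the actual assessment tooling.

```python
import re
from collections import Counter

# Hypothetical inputs: routes declared in the codebase, and raw access-log lines.
declared_routes = {"/orders", "/orders/export", "/invoices", "/invoices/legacy-sync"}

access_log = [
    '10.0.0.5 - - "GET /orders HTTP/1.1" 200',
    '10.0.0.7 - - "POST /invoices HTTP/1.1" 201',
    '10.0.0.5 - - "GET /orders HTTP/1.1" 200',
]

# Count how often each declared route appears in real traffic.
hits = Counter()
for line in access_log:
    match = re.search(r'"(?:GET|POST|PUT|DELETE) (\S+) HTTP', line)
    if match and match.group(1) in declared_routes:
        hits[match.group(1)] += 1

# Happy paths carry traffic; ghost paths never do and are candidates for
# retirement rather than rebuild.
happy_paths = {route for route in declared_routes if hits[route] > 0}
ghost_paths = declared_routes - happy_paths

print(sorted(ghost_paths))  # routes with zero production traffic
```

In practice the same diff runs over weeks of logs and the full route inventory extracted in Week 1, but the principle is exactly this: code with no observed traffic gets flagged, not rebuilt.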
What does the 30 days look like?
Roughly one week per major artifact, with overlap:
- Week 1 — Code ingestion. Repository access, dependency mapping, first-pass spec extraction using Claude Code.
- Week 2 — Log analysis. Production logs analyzed to distinguish real traffic patterns from theoretical code coverage.
- Week 3 — Roadmap construction. Features prioritized by frequency, risk, and replacement effort. Ghost paths flagged for retirement rather than rebuild.
- Week 4 — Architecture and review. Target architecture finalized; executive review and handoff.
Your team is in the loop throughout. This isn’t a black-box engagement where you hand over a repo and wait.
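The API façade finalized in Week 4 follows the strangler-fig pattern: migrated features route to new services, and everything else falls through to the legacy system. A minimal sketch, with illustrative endpoint and backend names only:

```python
# Strangler-style routing: each feature moves to a new service one at a time;
# unmigrated traffic keeps flowing to the legacy system, so there is no cutover
# event and no downtime. All names here are hypothetical.

LEGACY = "legacy-monolith"

# Routing table grows one entry at a time as features are replaced per the roadmap.
migrated = {
    "/invoices": "invoice-service",
    "/orders":   "order-service",
}

def route(path: str) -> str:
    """Return the backend that should handle this request path."""
    for prefix, backend in migrated.items():
        if path == prefix or path.startswith(prefix + "/"):
            return backend
    return LEGACY  # everything not yet migrated stays on the legacy system

print(route("/orders/42"))   # handled by the new order service
print(route("/reports/q3"))  # still served by the legacy monolith
```

The design choice that matters is the default branch: the façade never breaks unmigrated features, which is what makes feature-by-feature replacement safe.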
Who runs the assessment?
An AI-native mini-pod, not a large consulting team:
- Principal Architect — owns strategy, strangler design, and integrity of the modernization plan
- AI Specialist — runs the spec-from-code and behavior-analysis workflows
- AI agents (Claude Code) — do the heavy-lift reading and analysis across the codebase
Small team, high leverage. AI agents execute; humans control intent, architecture, and quality.
What does the roadmap actually contain?
The roadmap is a decision document, not a Gantt chart. It includes:
- Feature-by-feature replacement order, driven by log frequency and business risk
- Effort estimates per feature, based on extracted specs rather than guesses
- Dependencies between features and external systems
- Ghost paths explicitly marked for retirement rather than replacement
- Risk callouts where behavior is ambiguous and needs human decisions before rebuild
A CTO can take this roadmap to the board without translation — which is the point.
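One way to make the replacement order reproducible rather than a matter of opinion is a simple per-feature score combining log frequency, business risk, and estimated effort. The weights, feature names, and numbers below are hypothetical, included only to show the shape of the calculation:

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    weekly_calls: int    # from the behavioral map
    risk: int            # 1 (low) .. 5 (high): business impact if it breaks
    effort_weeks: float  # from the extracted specs

def priority(f: Feature) -> float:
    # High-traffic, high-risk, low-effort features replace first. Ghost paths
    # (zero observed traffic) score 0 and are retired rather than rebuilt.
    if f.weekly_calls == 0:
        return 0.0
    return (f.weekly_calls * f.risk) / f.effort_weeks

backlog = [
    Feature("invoice-generation", 12_000, 5, 3.0),
    Feature("csv-export",            400, 2, 1.0),
    Feature("fax-notifications",       0, 1, 2.0),  # ghost path
]

for f in sorted(backlog, key=priority, reverse=True):
    print(f"{f.name}: {priority(f):.0f}")
```

A real roadmap also accounts for dependencies between features, but a transparent score like this is what lets the replacement order survive a board-level challenge.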
How is this different from a typical discovery phase?
Traditional discovery produces slides. A Re-Engineer assessment produces artifacts that survive the engagement and feed directly into the rebuild.
| Dimension | Typical Discovery Phase | Re-Engineer Assessment |
|---|---|---|
| Primary output | Slide deck, written report | Executable specs, behavioral map, roadmap, architecture |
| Basis of findings | Interviews, documentation review | Direct code analysis + production log analysis |
| Reusability | Informs future RFPs | Becomes input to the rebuild itself |
| Ghost-path handling | Usually ignored | Explicitly identified and excluded |
| Vendor dependency | Often deepens it | Designed to reduce it |
| Timeframe | 4–12 weeks | 30 days |
| Commitment required | Usually ties to follow-on SOW | Standalone — artifacts keep their value either way |
What do we need to provide to start?
Minimal inputs. No reorganization, no code freeze:
- Read access to the code repository
- Access to production logs, or a representative sample covering typical traffic patterns
- A few hours of time from engineers who know the system’s history
- One executive sponsor for prioritization decisions
The assessment runs alongside normal operations.
What happens after the 30 days?
Three options, and you pick:
- Proceed with incremental replacement — the mini-pod expands and begins rebuilding features per the roadmap
- Take the artifacts and continue in-house — the specs and roadmap are yours; your team executes
- Pause — revisit when timing is right; the artifacts don’t expire
There is no hidden dependency that forces option 1. That’s deliberate.
How is this different from vendor-led modernization assessments?
Standard vendor assessments are designed to set up the next statement of work. The output is shaped to justify the follow-on engagement, which is why the deliverable is usually a slide deck rather than anything your engineers can build from.
A Re-Engineer assessment is designed to give you ownership, whether or not you continue with us. The specs, the roadmap, and the architecture are portable by design. If you hand them to another integrator — or your own team — they can execute from them.
That’s the structural difference: the assessment transfers knowledge instead of protecting it.
What does a CTO walk away with?
Thirty days from kickoff, you have:
- A clear picture of what your legacy system actually does, separated from what it could theoretically do
- A prioritized plan for replacing it without downtime
- A target architecture you can share with the board, your team, and any future integrator
- An informed answer to “rewrite or incremental replacement” — backed by data, not assumptions
That’s more progress than most modernization programs make in a year of planning.
Next step
Thirty days from today you could have executable specs, a behavioral map, a prioritized roadmap, and a target architecture — instead of another round of strategy meetings.
