The Engagement Model

Phase 1: Diagnostic (Week 1)

Before I commit to anything, I need to understand what's actually happening. The first week is spent reading code, talking to engineers, mapping the delivery pipeline, and identifying the real bottlenecks. Root causes often differ from the initial assessment. That's why the diagnostic week matters.

At the end of the week, you get an honest assessment: what I found, what I recommend, and whether I'm the right fit for the engagement. Sometimes the answer is no, and I'll tell you that up front.

Phase 2: Embedded Delivery (4 weeks – 2 years)

I join your team as a working engineer. Daily standups. PR reviews. Architecture sessions. Pair programming when it's useful. I make decisions, write code, and establish the patterns your team will build on after I'm gone.

For AI architecture engagements, this phase includes model evaluation and selection, data pipeline design and implementation, governance framework setup, and documentation with EU AI Act requirements in mind. I write the code, not just the recommendations. By mid-engagement your team is reviewing AI-related PRs with the same confidence they bring to any other part of the codebase.

My time goes to the work that moves the needle: code, architecture decisions, and direct collaboration with engineers. Less time in meetings, more time shipping.

Engagement length depends on the scope. Some run a few months, others extend to a year or more when the problem is genuinely complex. My longest engagements have run up to three years. We'll know the likely timeline by the end of the diagnostic.

Phase 3: Handoff & Exit (Final 2 weeks)

The exit is planned from the start. During the final phase, I shift from doing to teaching. Documenting decisions, pairing with the person who'll own the architecture after me, and making sure the patterns I've established are understood, not just implemented.

The test is simple: can the team continue at the same velocity without me? If yes, the engagement was successful. If no, I haven't done my job.

What You Get

  • An honest diagnostic that identifies root causes, even when they're not what you expected to hear
  • AI architectural clarity: which models to use and why, how your data pipeline is structured, where the governance checkpoints are, and where EU AI Act requirements are likely to apply. Documented, not just noted.
  • Architectural decisions informed by 15+ years of seeing what works and what fails at scale
  • Patterns and processes that outlast the engagement
  • A team that's stronger, more confident, and fully equipped to continue without me
  • Knowledge transfer that sticks. I train the people I work with so they own the architecture, not just inherit it
  • A clean exit with no dependency on me

What I Don't Do

  • Slide decks and steering committees
  • Staff augmentation or body-shopping
  • Open-ended retainers with no exit plan
  • Technology recommendations without implementation
  • Engagements where I can't write code

Typical Working Cadence

Daily: Standup with the team. Code review. Hands-on work.
Weekly: Architecture session, with decisions documented as Architecture Decision Records (ADRs). Progress check with the engagement sponsor.
Bi-weekly: Broader update with leadership if needed. Honest assessment of where we are vs. where we need to be.
End of engagement: Written handoff document. Knowledge transfer sessions. Clear ownership map.

Collaboration Principles

I have opinions. I hold them loosely.

I'll tell you what I think the right call is and why. I'm open to being wrong. If you have a better argument, I want to hear it. What I won't do is hedge. "It depends" is only useful if I explain what it depends on.

I optimise for the team that stays, not the one I'm on.

Every decision is made with the question: "Will this make sense when I'm gone?" If a pattern requires my presence to work, it's the wrong pattern.

I'm direct about problems.

If something is broken, I'll say it's broken. If a decision was wrong, I'll say that too, including my own. Polite disagreement is fine. Silence when something's going wrong is not.

I treat AI as infrastructure, not magic.

Treat a model like you'd treat any other external dependency: it has failure modes, it needs monitoring, and it will surprise you at the worst moment. My job is to design systems where that surprise is survivable. That requires the same rigour you'd apply to a database schema, not the same optimism you'd apply to a new framework.
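A principle like this is easiest to see in code. Here is a minimal sketch of what "survivable surprise" can mean in practice: bounded retries, explicit failure modes, and a fallback so a flaky model degrades gracefully instead of taking the system down. The names (`guarded_completion`, `call_model`) are illustrative, not from any particular engagement.

```python
import time


class ModelTimeout(Exception):
    """Raised when a model call exceeds its time budget."""


def guarded_completion(call_model, prompt, retries=2, fallback="unavailable"):
    """Call a model the way you'd call any external dependency.

    Bounded retries with a short backoff, a named set of expected
    failures, and an explicit fallback value instead of an unhandled
    exception at the worst possible moment.
    """
    for attempt in range(retries + 1):
        try:
            return call_model(prompt)
        except (ModelTimeout, ConnectionError):
            # A metrics/logging hook belongs here; back off briefly.
            time.sleep(0.1 * (attempt + 1))
    return fallback
```

Usage: a healthy model returns its answer; one that keeps failing returns the fallback rather than crashing the caller.

```python
def down(prompt):
    raise ConnectionError("model endpoint down")

guarded_completion(down, "hi", retries=1, fallback="fb")  # returns "fb"
```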

Working Style

Pragmatic, not dogmatic

I care about what works in your context, not what's trendy. If a "boring" technology is the right fit, that's what I'll recommend.

Collaborative by default

I ask a lot of questions. I assume your team knows things I don't. The best solutions come from combining what I've seen before with what you know about your business.

Direct about problems

I'll tell you when something's off, but I'm not interested in being right. I'm interested in getting it right.

Common Questions

What does the first week look like?

The first week is a diagnostic. I read the code, talk to engineers, map the delivery pipeline, and identify the real bottlenecks. At the end of the week, you get an honest assessment: what I found, what I recommend, and whether I'm the right fit for the engagement.

How do you integrate with an existing team?

I join as a working member. Daily standups, PR reviews, architecture sessions, pair programming when it's useful. I make decisions with skin in the game, not from the sidelines. The collaboration typically feels natural within the first week.

What happens after you leave?

The exit is planned from the start. I document decisions, pair with the person who'll own the architecture after me, and ensure the patterns I've established are understood, not just implemented. The test is simple: can the team continue at the same velocity without me?

What if we need you longer?

Extensions happen when the scope genuinely requires it. But I'll be honest about whether extending is solving a real problem or creating a dependency. The goal is always a self-sustaining team, not a permanent consultant.

Typical Engagements

Diagnostic (1 week): A written assessment of where you are, what's broken, and what to do next. Every engagement starts here.

Embedded Sprint (4–12 weeks): Integrated with your engineering team. Architecture decisions, code, AI pipeline design. For teams with a defined problem and a clear scope.

Strategic Build (6–12 months): Full architecture and delivery of a production AI system. From first use case to operational handoff, with governance built in from day one.

All engagements begin with the paid diagnostic week. Day rates are discussed in the first conversation.

Interested?

The first step is a conversation. Usually 30 minutes, no prep required on your side. Tell me what's going on and I'll tell you whether I can help.