I don't think of AI as good or bad. I don't evaluate it from a distance. It's how I work now. That happened gradually, over about two years, and each step changed what I spend my time on. Not the problems I solve. Not the quality I aim for. Just how I get there.
Here's how that shift actually happened.
Starting point: AI as content engine
The first time AI fundamentally changed a project for me was on a training platform for commercial truck drivers. BKF Online Schulungen provides mandatory continuing education for Berufskraftfahrer in Germany. Drivers watch training videos on safety, compliance, equipment operation. After each video, they answer comprehension questions to verify they understood the material.
The problem was scale. Every new video needed a set of questions, correct answers, wrong-answer feedback, and timestamps pointing back to the relevant section of the video. All of this in multiple languages. Creating that content manually for every video was slow and expensive. It was the bottleneck that determined how fast new courses could ship.
We connected OpenAI's APIs to the content pipeline. When a new video is uploaded, the system transcribes the audio automatically, generates comprehension questions from the transcript, produces correct and incorrect answers, attaches timestamps so that a wrong answer redirects the learner to the right section of the video, and translates the entire package into the required languages. The whole pipeline runs without manual intervention.
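The pipeline above can be sketched in a few lines. This is a minimal illustration, not the production code: the function names, the `Question` shape, and the sample data are all assumptions, and the three backend callables stand in for the real OpenAI audio and chat API calls.

```python
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    correct: str
    wrong: list[str]     # wrong answers; per-answer feedback omitted for brevity
    timestamp_s: float   # second in the video a wrong answer redirects to

def build_course_content(transcribe, generate, translate, video_path, target_langs):
    """Transcript -> questions -> translated packages, with no manual step.

    The three callables are injected so the real backends can be
    swapped for stubs in tests.
    """
    transcript = transcribe(video_path)          # timestamped transcript text
    questions = generate(transcript)             # questions in the source language
    package = {"de": questions}                  # German source package first
    for lang in target_langs:
        package[lang] = [translate(q, lang) for q in questions]
    return package

# Stub backends stand in for the real API calls.
content = build_course_content(
    transcribe=lambda path: "Check your mirrors before reversing. [t=42s]",
    generate=lambda t: [Question("When do you check your mirrors?",
                                 "Before reversing", ["Afterwards"], 42.0)],
    translate=lambda q, lang: q,                 # identity stub
    video_path="safety_module_3.mp4",
    target_langs=["en", "pl"],
)
```

The key design choice is that the model calls are just replaceable functions: the pipeline's structure (transcribe, question, timestamp, translate) is fixed, while the backends can change.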
This wasn't a developer productivity tool. It was a product feature. AI removed the content production bottleneck, and the platform could scale its course library at a fraction of the previous cost and time. That was the first time I saw AI not as an experiment but as infrastructure that changed the economics of a product.
Coding companion: useful but limited
Around the same time, I started using GitHub Copilot on the Siemens SIMATIC AX project. The team explored it as a developer tool, trying to understand where it helped and where it got in the way.
At that stage, it was a coding companion. Autocomplete, dramatically accelerated. It could fill in boilerplate, suggest test cases, and occasionally produce a working function from a comment. Useful, but limited. The developer was still doing all the thinking. The AI just typed faster.
The real value wasn't the code it generated. It was the friction it removed from trying things. When writing a function takes thirty seconds instead of five minutes, you're more willing to experiment. You write the version you're not sure about because throwing it away costs almost nothing. That shift in willingness to iterate was more valuable than the code itself.
Thinking partner: idea to plan
The next shift was using LLMs not to write code but to think. I started running local models as a kind of adversarial interviewer. I'd describe an idea, an architecture, a product concept, and ask the model to challenge it. Where are the gaps? What assumptions am I making? What would a sceptic ask?
This was messy and conversational. Lots of back-and-forth. Lots of human intervention. The model would go down a tangent and I'd pull it back. It would miss the point and I'd reframe. But the output wasn't the model's answers. The output was my thinking, sharpened by having to defend it against a relentless (if sometimes obtuse) questioner.
What came out of those sessions were plans. Not code. Not designs. Structured plans that described what to build, why, and in what order. The thinking was mine. The process of extracting it was accelerated by the model.
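Mechanically, those sessions reduce to a system prompt plus a loop that keeps the challenge going. A minimal sketch, with the model backend injected as a callable; the `ask` function is a stand-in for whatever local model client is in use, and the prompt wording is illustrative, not a quote from my actual sessions:

```python
CHALLENGE_PROMPT = (
    "You are an adversarial interviewer. I will describe an idea. "
    "Do not agree with me. Find the gaps, name my unstated assumptions, "
    "and ask the questions a sceptic would ask."
)

def interview(ask, idea, rounds=3):
    """Run a short adversarial session.

    `ask` maps a list of chat messages to the model's reply string.
    Each round, the model challenges and the human (stubbed here with a
    fixed nudge) defends, so the transcript grows by two messages.
    """
    messages = [{"role": "system", "content": CHALLENGE_PROMPT},
                {"role": "user", "content": idea}]
    for _ in range(rounds):
        reply = ask(messages)
        messages.append({"role": "assistant", "content": reply})
        # In practice the human writes a real defence at this point.
        messages.append({"role": "user", "content": "Here is my defence. What else?"})
    return messages
```

The transcript, not the model's final answer, is the artifact: it records the objections raised and the defences made, which is the raw material for the plan.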
Autonomous builder: plan to product
Then I handed those plans to Claude Code. And the role shifted again.
The early attempts came with a learning curve. Too much guidance and the output was exactly what I described, but barely usable in practice. Too little guidance and it built something entirely different from what I intended. Finding the right level of direction was the skill I had to develop. Not prompt engineering in the superficial sense. More like briefing a new team member: enough context to make good decisions, enough freedom to solve problems I hadn't anticipated.
Adding MCP servers changed things further. Connecting Claude Code to a Storyblok CMS instance meant the agent could query content structures, understand the data model, and make informed decisions about components and layouts without me manually typing out the context. The information it needed was available directly, rather than filtered through my description of it.
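Wiring an MCP server into the agent is a configuration step rather than code. A hypothetical entry in the general `mcpServers` format that MCP clients use; the server package name and token variable are assumptions, not the actual setup:

```json
{
  "mcpServers": {
    "storyblok": {
      "command": "npx",
      "args": ["-y", "storyblok-mcp-server"],
      "env": { "STORYBLOK_TOKEN": "..." }
    }
  }
}
```

Once registered, the agent can call the server's tools itself, which is what removes the step of me describing the content model by hand.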
And now we're at the point where I can spin up a team of agents, provide a clear briefing with intent and constraints, and let them build. Not production systems for enterprise clients. But working prototypes. Tangible things I can interact with, test against real scenarios, and use to validate whether an idea holds up.
What actually changed
My role shifted from writing code to validating outcomes. The progression looks like this:
- 2024: AI writes content, I build the platform around it
- Early 2025: AI writes code alongside me, I make all the decisions
- Mid 2025: AI challenges my thinking, I produce better plans
- Late 2025: AI builds from my plans, I validate and refine
- 2026: AI teams build from my briefings, I focus on problem definition and quality
Code became a layer I can regenerate. If a prototype doesn't work, I don't debug it. I refine the brief and let it be rebuilt. If an approach feels wrong, I explore three alternatives in the time it used to take to commit to one. The throwaway cost dropped to near zero, which means the willingness to experiment increased dramatically.
What didn't change: someone still needs to define the right problem. Someone still needs to evaluate whether the solution actually works for the people who'll use it. Someone still needs to make the architectural decisions about where AI belongs in a system and where it doesn't. The UX, the architecture, the problem domain. Those are still human work. They're the work that matters more now, not less, precisely because everything else got cheaper.
The real power isn't more output
The temptation with AI is to use it to produce more. More code, more features, more content. That path leads to noise. More stuff that nobody asked for, built faster than anyone can evaluate.
The real power is the opposite. AI lets me concentrate on the problem domain and the outcome. I can go from idea to something tangible fast enough that the idea gets tested before I've invested weeks in building it. I can explore directions that would have been too expensive to try. I can throw away work without guilt because the cost of recreating it is trivial.
This doesn't replace the diagnostic work, the architectural thinking, the cross-functional decisions that make up most of what I do for clients. It makes all of that more effective. When I walk into a new engagement and need to understand a platform, I can prototype solutions to test my understanding in hours instead of days. When I'm evaluating architecture options, I can build proof-of-concept implementations of each one instead of just reasoning about them on a whiteboard.
AI didn't change what good engineering looks like. It made everything that isn't good engineering easier to discard. And that, quietly, changed everything about how I work.