Perspectives on technology strategy, delivery, and the decisions that shape how organisations build.
Most engineering teams are asking how to make AI fit how they work. That is the wrong question — and it explains why the numbers keep getting worse.
AI has changed the economics of validation. It has not changed the underlying logic of why starting small produces better outcomes. It has simply removed the last credible excuse for not doing it.
Most AI projects underinvest in data infrastructure and overinvest in models. The model is never the limiting factor.
The most revealing things about a software team are not in the codebase. They are in how the team talks about its work.
Before you can fix anything, you have to understand why it is broken. And why it is broken is almost always structural, not technical.
The visible cost of a CTO departure is a recruitment process. The invisible cost is what happens to the organisation while the seat is empty.
When a client goes quiet mid-engagement, the instinct is to assume they are busy. They are usually not busy. They have lost confidence.
The cost of non-compliance gets discussed at length. The cost of compliance done badly does not. They are often comparable.
Every importer approaching CBAM as a regulatory exercise is setting itself up for a harder time than necessary. The organisations that will manage it well are treating it as a data infrastructure problem.
Engineer departures are almost always telling you something worth understanding. The standard response is designed to fill a seat, not to read the signal.
Every team using AI in development thinks they are ahead of where they are. Here is a framework for knowing where you actually stand — and what it costs to mistake one level for another.
Most technology assessments end with a document. A good technology assessment ends with a decision.