Most technology strategies are reasonable documents. They identify the right problems, propose sensible responses, and reflect a genuine understanding of what the organisation needs. The problem isn’t the strategy. It’s everything that happens between the strategy being approved and the work actually getting done.
The gap between technology strategy and delivery is one of the most consistent failure modes in large organisations. It’s also one of the least examined. When a strategy fails to land, the post-mortem usually finds fault with the strategy itself — it was too ambitious, too vague, too disconnected from the business. Sometimes that’s right. More often the strategy was fine and the delivery system wasn’t up to the task.
The planning fallacy, applied to organisations
Daniel Kahneman’s planning fallacy — the tendency to underestimate the time, cost, and risk of future actions while overestimating the benefits — applies as much to organisations as to individuals. Technology roadmaps are systematically optimistic: they assume that resources will be available when needed, that dependencies will resolve themselves, that teams will execute at peak capacity, and that the environment won’t change in ways that require adjustment.
None of those assumptions are safe, and all of them fail in predictable ways.
The capacity problem is fundamental. Most technology teams are not operating with significant spare capacity. They are managing existing systems, handling incidents, dealing with technical debt, and supporting business-as-usual requirements. The strategy assumes that these commitments can be reduced or absorbed to make room for new initiatives. In practice, the existing workload is remarkably resistant to compression. Organisations consistently underestimate how much of their delivery capacity is already committed, and overestimate how much is available for strategic work.
The dependency problem is related. Strategies are typically built as though each initiative can be sequenced cleanly, with work on one programme completing before the next begins. Real delivery doesn’t look like that. Initiatives share people, systems, and organisational attention. Delays compound. The initiative that was scheduled to complete in Q2 slips to Q4, which pushes the initiative that depended on it into the following year, which puts the initiative that depended on that one out of scope entirely. The strategy remains intact on paper while its delivery window closes in practice.
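The compounding effect is easy to see in a toy schedule. The sketch below is illustrative only; the initiative names, durations, and slip figures are all invented. It simply propagates each initiative's actual duration through a sequential dependency chain:

```python
def finish_times(initiatives):
    """Finish month for each initiative, where each starts when its
    dependency finishes (or at month 0 if it has no dependency)."""
    finish = {}
    for name, duration, dep in initiatives:
        start = finish.get(dep, 0)
        finish[name] = start + duration
    return finish

# On paper: three six-month initiatives, each depending on the last.
plan = [("A", 6, None), ("B", 6, "A"), ("C", 6, "B")]

# In practice: A slips from Q2 to Q4 (6 -> 10 months) and the
# downstream initiatives each run three months over.
actual = [("A", 10, None), ("B", 9, "A"), ("C", 9, "B")]

print(finish_times(plan))    # {'A': 6, 'B': 12, 'C': 18}
print(finish_times(actual))  # {'A': 10, 'B': 19, 'C': 28}
```

A four-month slip on A alone pushes C from month 18 to month 28, nearly a year later, without C itself having gone badly wrong. That is the mechanism by which a strategy stays intact on paper while its delivery window closes.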
Governance that slows without guiding
There is a version of technology governance that exists to improve decisions — to bring the right information to bear, identify risks before they materialise, and ensure that commitments are made only when there is genuine confidence in delivery. Most large organisations do not have this version.
What they have instead is a review process that exists to provide assurance — to create a record that due process was followed, that the right people were consulted, and that no single decision-maker can be held individually accountable if things go wrong. This is a fundamentally different purpose, and it produces fundamentally different behaviour.
The signature of governance-as-assurance is the approval that takes longer than the work it is approving. A change request that requires three weeks of review before a two-day piece of work can begin is not managing risk; it is distributing accountability. The risk doesn't change; it simply comes with documentation now. Meanwhile, the organisation has spent three weeks not doing the work and is no better positioned to do it well.
Good governance is faster because it is more focused. It asks: what are the decisions that actually matter here, what information do we need to make them well, and who needs to be involved? It treats decision quality — not process compliance — as the measure of whether governance is working. That focus requires discipline to maintain, because the incentive to add more review steps is persistent and the cost of each individual addition looks small.
The capability problem
Technology strategies frequently assume capabilities that the organisation doesn’t yet have. This is sometimes deliberate — the strategy is designed to build those capabilities — but more often it is an oversight. The people writing the strategy know what they are trying to achieve; they are less clear on whether the organisation has the engineering talent, the operational processes, and the institutional knowledge to get there.
The build/buy/partner decision is where this becomes most consequential. An organisation that decides to build a capability in-house is committing to developing and retaining the talent to do so. An organisation that decides to buy is committing to the integration, customisation, and change management work that makes a product useful in a specific context. An organisation that decides to partner is committing to a relationship that requires ongoing management and carries its own dependencies and risks. All three options require investment in the delivery organisation, and that investment is routinely underestimated.
The pattern that causes the most damage is what might be called the thin team problem: a strategy is approved, a team is assembled that is too small for the ambition it has been given, and the team spends its time managing stakeholder expectations rather than delivering working technology. The initiatives remain on the roadmap because removing them would require an uncomfortable conversation about what the organisation is actually capable of. The team works hard and achieves little. The strategy fails at delivery, and the team is blamed for the failure.
Organisational physics
Technology strategy is always, at some level, an organisational change programme. The technology is the mechanism, not the outcome. What the organisation is actually trying to achieve — faster delivery, better reliability, more integrated operations, clearer data — requires changes to how people work, how teams are structured, how decisions are made, and where authority sits. Technology enables those changes but cannot substitute for them.
Organisational change follows its own physics. Existing structures, incentive systems, and working patterns are more stable than they appear. They persist not because people are resistant to change — most people are willing to change if they understand why and can see a credible path — but because they represent accumulated solutions to real problems. Changing them requires understanding what problems they were solving as well as what problems they are now causing.
The failure mode is treating the technology programme as complete when the systems go live. The systems are live but nothing has changed: the old processes are still running in parallel, the old incentives are still in place, and the new capability is not being used in the ways it was designed for. Sustainable change requires the organisational design work to keep pace with the technology delivery, and it requires leadership that remains engaged long enough for new patterns to take hold.
What works
The common thread in technology strategies that successfully reach delivery is constraint: fewer initiatives, better resourced, with clearer accountability and more realistic timelines.
This is harder to achieve than it sounds. Technology strategies grow because the process of building them surfaces more problems than any reasonable programme can address, and because the stakeholders who identify problems want to see them on the list. The discipline of removing things from the strategy — of explicitly deciding not to address a legitimate problem because the organisation doesn’t have the capacity to address it well — is one of the least comfortable parts of the work.
The other consistent factor is a delivery architecture that runs alongside the technical architecture: a clear model of how the work will be organised, resourced, and sequenced; where the dependencies are; what the governance process is and what it is for; and how progress will be measured. Not a project plan — plans at this level of detail are almost immediately wrong — but a model of delivery that is explicit enough to be stress-tested and honest about what it requires.
Building feedback loops between strategy and execution matters more than most organisations acknowledge. A strategy that cannot be updated in response to what delivery is learning is not a strategy — it is a set of commitments that will be met or missed but not improved. The organisations that do this well treat their technology strategy as a living document: revisited regularly, updated when circumstances change, and held as a hypothesis about how to create value rather than a plan that needs to be defended.