The AI Maturity Gap: Why Most Mid-Market Companies Are Further Behind Than They Think

UX maturity taught us that capability beats tools. AI maturity is the same lesson, arriving faster and with much higher stakes.


I come from product design. Before diving deep into AI and what it means for organisations, I spent years working on digital products, leading product and product design teams, and helping companies understand whether their approach to design was actually working — or just looking like it was.

In the design world, we've had the concept of UX maturity for a long time. The idea that an organisation's ability to do good design isn't just about hiring talented designers — it's about whether design is embedded in how decisions get made, how research feeds into product direction, and whether there's a real feedback loop between what gets built and what users actually need.

Most companies thought they had good UX. Most were at Level 1 or 2 on a five-point scale and didn't know it.

Sound familiar? It should. Because the same dynamic is now playing out with AI — at twice the speed and with significantly higher stakes. And if you sit on a board, manage a portfolio, or run a company that's just been through an acquisition, this gap is quietly showing up in your numbers whether you've named it yet or not.

The parallel is almost exact

When UX maturity frameworks first emerged — Nielsen Norman Group published its model years ago — they identified a consistent pattern. Companies at the low end weren't bad companies. They weren't even necessarily bad at design. They just didn't have the organisational capability to do design deliberately and at scale. Decisions were made by instinct. Research happened sporadically. There was no system connecting design work to business outcomes.

The organisations that invested in building real UX maturity didn't just make better products. They moved faster, wasted less, and made better decisions under pressure. The maturity wasn't about design tools or design headcount. It was about capability.

AI maturity is the same concept, a decade later, in a much bigger arena.

The companies that will capture disproportionate value from AI over the next five years are not the ones with the most tools or the most ambitious strategies. They're the ones that have built the organisational capability to deploy, measure, and scale AI initiatives deliberately.

For anyone with a stake in a company's performance — whether as an owner, operator, or investor — this capability gap is now a valuation question, not just a technology question.

Most mid-market companies are not those companies yet. And the gap between where they think they are and where they actually are is larger than almost any leadership team realises.

What AI maturity actually means

Let's be precise. AI maturity is not about whether your company uses ChatGPT. It's not about having an AI strategy document signed off by the board. And it's definitely not about the number of AI pilots running across departments.

AI maturity is about the measurable, compounding impact of AI on your organisation's cost base, revenue, and ability to move fast.

Everything else is noise.

A company can have 40 AI experiments running simultaneously and have near-zero AI maturity — if none of those experiments have moved beyond proof-of-concept, if there's no coherent infrastructure underneath them, and if the business can't tell you what they've collectively cost or returned.

In UX terms: you can have a team of brilliant designers producing beautiful work that never ships, never gets tested, and never influences a roadmap decision. That's not UX maturity. It's UX theatre. The AI equivalent is everywhere right now.

The four dimensions that actually matter

When I assess AI maturity inside an organisation, I look at four dimensions. The gaps between them are usually where the real story sits.

Usage. What AI tools are actually being used, by whom, and how often? Not what's been licensed — what's being used. Token spend across OpenAI, Claude, and other platforms tells a more honest story than any adoption survey. Shadow AI usage — tools departments have adopted without IT or management awareness — is almost always higher than leadership expects. Just like shadow IT was a decade ago.

Spend. What is the organisation actually spending on AI and automation, and how is that spend developing over time? License fees, implementation costs, token consumption, the engineering hours absorbed by AI initiatives. Most mid-market companies can't answer this question accurately. That's a problem — not because the spend is necessarily wrong, but because you can't optimise what you can't see.

Infrastructure. What is the technical foundation underneath the AI activity? Connected APIs, data pipelines, model integrations, security and compliance frameworks. This is where the gap between "we use AI tools" and "we have an AI-capable organisation" becomes visible. In UX, the equivalent was the gap between "we do usability testing" and "we have a research system that feeds into every product decision." One is an activity. The other is a capability. The difference matters enormously when you're trying to scale.

Value. The hardest and most important dimension. What has the AI investment actually returned? Cost reduction, time saved, revenue influenced, innovation cycles shortened. The organisations that can answer this question with real numbers — even rough ones — are the ones building genuine maturity. The ones that can't are, at best, running expensive experiments. ¹
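To make the four dimensions concrete, here is a minimal sketch of what a ground-level snapshot might look like in code. Everything here is illustrative — the field names, the euro figures, and the idea of treating "we can't answer that" as a finding in itself are my own shorthand, not a prescribed tool.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MaturitySnapshot:
    """One organisation's honest answer across the four dimensions.
    None means 'we cannot answer this question' — itself a finding."""
    monthly_token_spend_eur: Optional[float]  # Usage: actual consumption, not licenses
    total_ai_spend_eur: Optional[float]       # Spend: licenses + tokens + engineering hours
    integrated_systems: Optional[int]         # Infrastructure: connected APIs / pipelines
    measured_return_eur: Optional[float]      # Value: cost saved or revenue influenced

def blind_spots(snapshot: MaturitySnapshot) -> list[str]:
    """List the dimensions the organisation cannot quantify."""
    labels = {
        "monthly_token_spend_eur": "usage",
        "total_ai_spend_eur": "spend",
        "integrated_systems": "infrastructure",
        "measured_return_eur": "value",
    }
    return [dim for field, dim in labels.items()
            if getattr(snapshot, field) is None]

# A typical early-stage answer: tools in use, nothing measured
print(blind_spots(MaturitySnapshot(None, None, None, None)))
# → ['usage', 'spend', 'infrastructure', 'value']

# Usage and spend are visible, but return is still unproven
print(blind_spots(MaturitySnapshot(1800.0, 42000.0, 3, None)))
# → ['value']
```

The point of a structure like this is not the numbers themselves but the gaps: in most assessments, the list of unanswerable questions is the real story.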

Why the gap is larger than most think

Three patterns show up reliably when I look honestly at AI adoption inside mid-market organisations.

The pilot graveyard. AI experiments get launched, run for a quarter, produce interesting results, and then quietly die because nobody owns the path from pilot to production. The organisation keeps starting new things instead of scaling what worked. I saw the same thing in UX — endless discovery projects that never connected to delivery. The investment was real. The learning was real. The compounding return never materialised.

The tool sprawl problem. Departments buy AI tools independently. Finance has one set, marketing has another, operations a third. Nobody has a coherent view of what's running, what it costs in aggregate, or whether any of it integrates. This is the AI equivalent of the SaaS sprawl that has cost mid-market companies billions in wasted license fees over the past decade — arriving faster and with more complexity underneath it. In a PE context, this sprawl is often invisible at acquisition — and shows up as a surprise six months into the hold.

The measurement vacuum. Organisations invest in AI without establishing what success looks like before the initiative starts. Six months later, nobody can say whether it worked. The investment gets renewed anyway because stopping feels like admitting failure. In product design, we called this the vanity metrics trap — tracking what's easy to measure rather than what actually matters. Same trap, different domain.

The maturity levels — a rough map

The point is not to label or rank, but to give a useful orientation: where you are, and what the next step looks like.

Level 1 — Experimental. AI tools are in use across some teams. No central visibility on usage or spend. No measurement framework. Individual champions driving adoption without organisational backing. This is where most mid-market companies are today, whether they know it or not. In UX terms: a few passionate designers doing good work with no seat at the table.

Level 2 — Structured. AI usage is visible and tracked. Spend is consolidated and governed. There are clear owners for AI initiatives and a basic framework for measuring return. Pilots have a defined path to production. This is where the gap between early adopters and the rest starts to become commercially significant. ²

Level 3 — Compounding. AI is embedded in core operations. Infrastructure exists to deploy and scale new initiatives consistently. The organisation is generating real, measurable return — cost reduction, faster innovation cycles, competitive differentiation. As I wrote in The Model is the Product, the companies operating here aren't selling features anymore — they're delivering outcomes. A small number of mid-market companies are here. Most are not. ³

The difference between Level 1 and Level 3 is not the quality of the AI tools. It's the organisational capability to use them deliberately.
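The map above can be read as a rough checklist. The sketch below is purely illustrative — the criteria names are my own shorthand for the level descriptions, not a formal scoring model.

```python
def maturity_level(visible_spend: bool,
                   pilots_reach_production: bool,
                   measured_return: bool,
                   embedded_in_core_ops: bool) -> int:
    """Map the rough criteria from the levels above to 1–3 (illustrative only)."""
    if embedded_in_core_ops and measured_return:
        return 3  # Compounding: embedded in core ops, measurably returning
    if visible_spend and pilots_reach_production:
        return 2  # Structured: tracked, governed, pilots have a path to production
    return 1      # Experimental: individual champions, no organisational backing

# Forty experiments and no visibility is still Level 1:
print(maturity_level(False, False, False, False))  # → 1
```

Note what the ordering implies: measured return without operational embedding still isn't Level 3 — an organisation can prove value in one pocket and remain structurally at Level 2.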

We learned this lesson with UX over fifteen years. The companies that built real design capability didn't just make nicer products — they outpaced competitors who were still treating design as decoration. The same dynamic is unfolding with AI right now, compressed into a much shorter timeframe. And as I explored in Why You Need to Pay Attention to OOMs, the underlying capability of these models is compounding fast — the organisations not building maturity now are not standing still, they're falling behind. ⁴

What to do with this

The first step is always an honest assessment. Not a vendor's maturity model designed to sell you their platform. Not a survey completed by the people most invested in looking good. A real, ground-level view of what's actually running, what it costs, and what it's returned.

That assessment tends to surface three things reliably: initiatives worth scaling that have been left to stagnate, spend that can be consolidated or cut, and structural gaps — in data, infrastructure, or governance — that are quietly limiting everything else.

The organisations that are pulling ahead on AI right now are not necessarily the ones with the biggest budgets or the most ambitious strategies. They're the ones that looked honestly at where they actually were, closed the gap between perception and reality, and then moved with focus.

That's the work. And it's not as complicated as the vendor ecosystem wants you to believe.


Philipp Kanape writes about AI, digital transformation, and building organisations that execute.


References

¹ Enterprise spending on cloud infrastructure services reached $330 billion worldwide in 2024 — a $60 billion jump year-on-year, driven largely by AI workloads. Top cloud providers are projected to spend $392 billion on data centers and related infrastructure in 2025, a 38% increase. Synergy Research Group / Morgan Stanley via Heatmap

² By early 2025, ChatGPT had surpassed 400 million weekly active users, with approximately 92% of Fortune 500 companies using OpenAI products in some form. The adoption curve has outpaced every previous technology wave. Reuters

³ Foundation Capital research projected AI-driven services will unlock a $4.6 trillion market opportunity — captured not by the most tool-heavy organisations but by those that have built consistent deployment and measurement capability. Foundation Capital

⁴ AI-oriented data centers are expected to consume 945 TWh of electricity globally by 2030 — more than double current levels. The compute OOMs driving this demand are the same ones compressing the AI capability curve that organisations are racing to keep up with. IEA