Your COO already knows why your AI isn't working

There's a number that keeps surfacing in conversations with operations leaders right now, and it tends to produce a reaction somewhere between relief and frustration.

Ask a CIO or CTO whether their workforce is fully ready to adopt AI and 39% will say yes. Ask the COO sitting next to them and only 7% agree. Same organisation, same AI investments, radically different readings of what's actually happening on the ground.

A survey of C-suite leaders by US advisory firm Grant Thornton, examining why AI isn't delivering the performance organisations expected, puts the gap at more than five times.

The COO is right.

What the COO sees that the CIO doesn't

Operations leaders are close to the work in a way that technology leaders rarely are. 

They know which processes run because one person has been doing them for eight years and holds all the judgment in their head. 

They know which handoffs are undocumented, which workarounds are load-bearing, and which workflows have never been mapped because nobody ever needed to map them before.

That knowledge, including the judgment calls, the informal rules and the expertise that doesn't appear in any system of record, is precisely what AI needs to function well.

And it's precisely what most AI programmes are being built on top of, without first understanding whether it exists in a usable form.

The CIO sees the tools. The COO sees what the tools have to work with.

When those two views diverge by a factor of five, the gap isn't about pessimism or resistance to change. It's about proximity to operational reality.

Why AI keeps stalling at the operational layer

The Grant Thornton data points to a structural problem: the executives most accountable for whether AI performs in operations are often the least involved in shaping what it gets built to do. 

Technology functions design the deployment. Operations functions inherit the results.

That ordering has consequences. 

AI systems don't invent business logic. They learn it from your workflows, your documented processes, your data. Where that foundation is thin, AI has nothing reliable to learn from.

The result is a system that automates a partial, increasingly outdated representation of how work actually happens.

Info-Tech Research Group warned recently that leaders risk losing critical knowledge as workforce changes and AI adoption accelerate. That's because critical knowledge is fragmented across individuals and systems, making it difficult to access, reuse or transfer when it matters. The problem predates AI adoption, but AI adoption makes it more expensive.

What does "AI-ready" actually mean for operations?

Most readiness assessments focus on technology infrastructure: is your data clean, are your systems integrated, have your teams completed the training modules? These matter. But they describe the container, not the contents.

Operational AI readiness requires something more specific. It requires knowing whether the knowledge your AI will run on has been captured, structured and verified.

That’s the question the COO is actually answering when they say the organisation isn't ready. 

It isn't scepticism about AI. It's an accurate read of what's missing underneath it.

Why does this gap keep widening?

Because the people who understand the operational knowledge problem aren't consistently in the room when AI strategy is built, and the people who are in the room are measuring readiness in ways that don't surface it.

Training completion rates. Tool adoption metrics. Pilot outputs. These are meaningful signals, but they don't tell you whether the operational logic being automated is complete, accurate or transferable when the person who originally designed it is no longer there.

Half of operations leaders in the Grant Thornton survey said they need a formal AI strategy or governance plan within the next six months to improve performance. 

That urgency is real. But a governance plan built on an undocumented operational foundation will hit the same wall faster, with more momentum behind it.

The question to ask before the next AI investment

Before committing further budget to automation or AI deployment, one question is worth asking directly: do we actually understand how the work we're trying to automate gets done?

Not how it's supposed to get done. How it actually gets done — including the judgment calls, the exceptions, the workarounds and the decisions that have never been written down.

If the honest answer is anything other than a confident yes, the investment case deserves a harder look.

Your COO already knows this. The gap in the data suggests the rest of the C-suite is catching up.
