Your AI investment is only as good as the knowledge underneath it

There’s a particular kind of meeting happening in operations teams right now. Someone is presenting the AI roadmap. 

There’s a slide about automation, one about efficiency gains, one about productivity uplift. And somewhere near the back, almost as an afterthought, there’s a slide about "knowledge management."

That ordering is wrong. And it’s costing organisations more than they realise.

Sitting underneath most AI programs is tacit knowledge: the unwritten expertise, judgment and process logic your team carries in their heads. That knowledge is the substrate your AI runs on.

Without it, you’re not automating a business. You’re automating a partial, increasingly inaccurate representation of one.

What AI needs to work

AI systems don’t invent business logic. They learn it from your data, your workflows, your decisions and your documented processes. The more complete and accurate that foundation is, the better the output.

The problem is that most of what makes your business function has never been documented. 

Studies consistently find that 80% of operational knowledge is tacit: the judgment a senior coordinator applies when a situation does not match the textbook, the shortcut a billing team member uses to resolve a claim before it gets rejected, the informal understanding between two teams about who handles what at the edge cases.

That knowledge does not appear in your system of record. It does not transfer cleanly in an onboarding document. 

And new research from Info-Tech Research Group published this week confirms what operations leaders already sense: critical knowledge is fragmented across individuals and systems, making it difficult to access, reuse or transfer when it matters most.

Feed an AI system a hollow foundation and you get a faster, more expensive version of an incomplete process.

Why does AI adoption keep stalling at the operational layer?

Because organisations are trying to automate workflows they have not yet understood.

Research covering more than 3,000 business and technology decision-makers found that 85% of organisations are piloting or using AI, but only 17% have integrated it into daily operations.  

That gap is not primarily a technology problem. It is a knowledge problem. AI is being deployed into environments where the operational logic it needs to learn from is scattered, undocumented and held by individuals who may not be there next quarter.

The Grant Thornton 2026 AI Impact Survey is blunter: CIOs and CTOs are five times more likely than COOs to say the workforce is ready for AI. That gap reflects two people looking at the same organisation and seeing completely different things. 

The COO is closer to the reality. They know which processes are actually held together by specific people, which handoffs are informal and which documentation is dangerously out of date.

AI adoption stalls at the operational layer because that layer has never been properly mapped.

How should your organisation approach this?

The answer isn’t simply more documentation. Most knowledge bases are already full of documents nobody reads, in formats nothing can use. The problem isn’t a lack of storage. It’s a lack of capture.

Effective knowledge capture does three things differently:

  • It surfaces tacit knowledge at the point of work, through structured conversation with the people who actually do it, rather than asking them to write it down retrospectively.

  • It converts what it captures into structured assets: SOPs that reflect what staff actually do, decision systems that encode experienced judgment, playbooks that a new hire can genuinely follow.

  • It maps where knowledge is concentrated, so your organisation can see its dependencies clearly: which processes are held by one person, which teams are operating on undocumented assumptions and where the highest-risk gaps sit.

The result is not just better documentation. It’s an operational foundation that your AI programme can actually build on, and that stays with your organisation regardless of who walks out the door.

The question to ask before your next AI investment

Before your organisation commits further budget to automation or AI deployment, one question is worth asking clearly: do we actually understand how the work we are trying to automate gets done?

Not how it’s supposed to get done. How it actually gets done, by the people doing it, including the judgment calls, the workarounds and the decisions that have never been written down.

If the answer is anything other than a confident yes, the investment case deserves a harder look. Automating a process you have not mapped does not make you more efficient. It makes your inefficiencies faster and harder to find.

Sugarwork captures the tacit knowledge your business runs on through structured video interviews with the people who hold it, converts that knowledge into structured assets your team can use, and maps where your operational knowledge is most at risk. 

The organisations that come out ahead won’t be the ones that deployed AI fastest. They’ll be the ones that knew what they had before they tried to automate it.
