The single biggest mistake companies make when approaching AI isn't technical. It's organisational — and it happens before a single line of code is written.

For the last several years, a predictable pattern has played out in boardrooms across every industry. A leadership team reads about what a competitor has done with AI, gets excited, and commissions a project. The brief goes something like: "We want to use AI to improve our operations." Or worse: "We want to implement a large language model."

Both of those sentences describe a technology first. Neither describes a problem. And that ordering — technology before problem — is precisely why most AI strategies collapse.

Starting with the answer, not the question

When a company says "we want to use AI", it has already made the most consequential strategic decision, and it has made that decision without any of the analysis that should precede it.

The question isn't "how do we use AI?" The right question is: "where in our business does a decision get made slowly, inconsistently, or at high cost — and what would it be worth to fix that?" From that question, you work backwards. Sometimes the answer is AI. Often it's a simpler process fix. Sometimes it's both.

The companies we've seen succeed with AI overwhelmingly share one trait: they started with a specific, measurable business problem. Not "improve customer experience" — that's too vague to build toward. But "our claims processing takes 11 days on average, and our competitors do it in 3 — reduce that gap" is a real problem with a real benchmark.

"The best AI brief we ever received was two sentences long: here is what takes too long, and here is what it costs us per year. Everything else follows from that."

The data readiness trap

Even when a company does start with a real problem, the second failure mode arrives quickly: the assumption that its data is ready.

It never is. Not because companies are negligent, but because data quality requirements for AI are much stricter than for human decision-making. A human analyst can intuit context, fill in gaps, and weight incomplete information sensibly. A model cannot — at least not without explicit guidance that has to be designed, tested, and iterated.

In a typical engagement, our team spends the first 10–15% of its time on what we call the data audit: mapping what data exists, where it lives, how consistently it's collected, how it's labelled, and whether the labels mean the same thing across different parts of the business. Remarkably often, they don't. A "closed" customer ticket in one system means a resolved issue. In another, it means the customer stopped responding. Training a model on that combined dataset teaches it nothing useful.
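
To make the failure concrete, here is a toy sketch of the kind of normalisation a data audit surfaces. The systems, field names, and mapping rules below are all invented for illustration; the point is that "closed" has to be translated into labels that mean one thing everywhere before any training set is assembled.

```python
# A toy illustration of the label problem described above. The systems,
# field names, and mapping rules are invented.

ticket_a = {"status": "closed", "resolution": "fixed"}  # System A: issue resolved
ticket_b = {"status": "closed", "resolution": None}     # System B: customer went quiet

def normalise_status(ticket: dict) -> str:
    """Map raw statuses onto labels that mean the same thing everywhere."""
    if ticket["status"] == "closed":
        return "resolved" if ticket["resolution"] else "abandoned"
    return ticket["status"]

print(normalise_status(ticket_a))  # -> resolved
print(normalise_status(ticket_b))  # -> abandoned
```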

This isn't a reason to delay AI work indefinitely. But it is a reason to budget explicitly for data preparation — which rarely appears in initial project scopes and almost always overruns when it's underestimated.

Misaligned success metrics

A model goes live. Now what? Too many organisations haven't defined what success actually looks like — and so they have no way to know whether the project worked.

"The model is 94% accurate" is not a business outcome. Depending on the use case, 94% accuracy might be outstanding or it might be catastrophic. A fraud detection model that catches 94% of fraud cases sounds impressive until you realise it's also incorrectly flagging 8% of legitimate transactions — and the cost of false positives exceeds the value of fraud prevented.

The metrics that matter are business metrics: processing time, cost per transaction, error rate, customer satisfaction, revenue impact. AI metrics are inputs to those numbers, not the numbers themselves. Every AI engagement should begin with a crisp definition of the business metric being moved, the baseline it starts from, and the target that would constitute success.
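
One lightweight way to enforce that discipline is to write the definition down as a structured artefact before any technology is chosen. The sketch below is hypothetical rather than a standard template; the field names are ours, and the numbers borrow the claims example from earlier as placeholders.

```python
# A hypothetical template for a success definition, written before any
# model or vendor is chosen. Field names and numbers are placeholders.

from dataclasses import dataclass

@dataclass
class SuccessDefinition:
    business_metric: str  # the number the project exists to move
    baseline: float       # where that number stands today
    target: float         # the threshold that would justify the investment
    unit: str
    measured_by: str      # who owns the number, and from which system

claims_project = SuccessDefinition(
    business_metric="average claims processing time",
    baseline=11.0,        # the 11-day average from the earlier example
    target=5.0,           # placeholder: closing most of the gap to 3 days
    unit="days",
    measured_by="operations dashboard, reviewed weekly",
)
```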

This sounds obvious. It is rarely done.

The build-vs-buy false choice

Once a company has defined a real problem and clear success metrics, a strategic question often surfaces: should we build this ourselves, or buy an off-the-shelf solution?

The framing is usually wrong. The choice is rarely binary. The more useful question is: what is proprietary here?

If your competitive advantage comes from what you do with data — the decisions you make, the speed at which you make them, the quality of outcomes — then the AI layer that enables those decisions is a candidate for custom development. Outsourcing it means outsourcing your edge.

If, on the other hand, the AI is enabling a commodity function — document parsing, meeting transcription, basic classification — then a bought solution is almost certainly the right answer. Building a document parser from scratch when several excellent ones exist commercially is not a strategic investment; it's a distraction.

The companies that consistently get this right are the ones that have been honest with themselves about where their differentiation actually sits.

What to do instead

The pattern we recommend is deceptively simple, but genuinely hard to execute in organisations that are under pressure to show AI progress:

  • Start with a problem inventory. Audit the ten most expensive, slow, or inconsistent processes in your business. Rank them by value of improvement and feasibility of AI involvement; a toy version of this ranking appears after the list.
  • Define success before scoping the solution. For each candidate, write down the metric being moved, the current baseline, and the minimum threshold that would justify investment.
  • Run a data audit before a technology selection. Understand what data you have, what you need, and the gap between them. Factor this into your timeline and budget explicitly.
  • Choose your scope deliberately. Decide what's worth building custom versus buying — based on where your competitive advantage lies, not based on what's technically interesting.
  • Instrument from day one. Build measurement into the solution from the start, not as an afterthought. If you can't measure whether it's working, you can't improve it — and you can't justify the next project.
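
To make the first step concrete, here is a toy version of the ranking exercise. The processes, cost figures, and feasibility scores are invented; in practice, the scores are judgment calls made with the people who run each process.

```python
# A toy problem inventory, ranked by value of improvement weighted by
# feasibility. All processes, costs, and scores are invented.

processes = [
    # (process, annual cost of the problem, feasibility of AI helping, 1-5)
    ("claims processing",      2_400_000, 4),
    ("invoice reconciliation",   900_000, 5),
    ("contract review",        1_500_000, 2),
]

# Expected-value proxy: size of the prize weighted by feasibility.
ranked = sorted(processes, key=lambda p: p[1] * p[2], reverse=True)

for name, cost, feasibility in ranked:
    print(f"{name}: {cost:,} at stake, feasibility {feasibility}/5")
```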

None of this is revolutionary. But in our experience, the companies that follow it consistently — even when it slows the initial sprint — end up with AI systems that compound in value over time. The companies that skip it end up with expensive experiments that get quietly shelved.

The strategy doesn't fail during deployment. It fails at the first meeting, when someone says: "we want to use AI," and nobody asks why.