Every month I have a call with a company that wants to automate something. And at least once a month, my honest answer is: don't.

Not "don't yet." Not "come back in six months." Just: this isn't the right problem to automate, and building AI on top of it will make things worse, not better.

This is genuinely hard to say because it's not what people want to hear. They've read about AI. They've seen competitors automate things. Someone on the board has asked why they're not doing more with AI. The pressure to automate is real, and the question "should we?" is rarely asked.

So here's the framework I use before we take on any project — the five situations where AI automation will either fail, deliver no real value, or create maintenance problems that eat your savings.

1. The process isn't systematised yet

Before you can automate a process, you need to understand it well enough to describe it precisely. This sounds obvious. It isn't.

I've had clients come to me wanting to automate their customer onboarding workflow, only to discover — three conversations in — that the onboarding workflow is completely different depending on who's doing it. Two people on the same team follow different steps, make different exceptions, and use different criteria to decide when a customer is "actually onboarded." There is no single process. There are five processes wearing a trench coat.

AI doesn't resolve underlying process confusion — it bakes it in permanently, at scale.

If you can't write down the process clearly enough that a new hire could follow it from day one, you're not ready to automate it. First, standardise. Then automate.

The tell-tale sign: when you ask five people who do the process to describe it, you get five different answers. The fix is a process audit, documentation, and at least one full cycle of the standardised process running manually before any automation is built. We've done this as part of scoping before. It takes 2–3 weeks. It's not glamorous. It's also the difference between automation that works and automation that gets turned off after a quarter.

2. A simpler system would work just as well

AI — especially LLM-based AI — is not the cheapest or most reliable tool for every job. Sometimes a rule-based system is genuinely better.

Here's a concrete example. A company wanted AI to classify inbound customer requests into five categories: billing, technical support, product feedback, cancellation, and general enquiry. That sounds like a good use case for NLP, right?

Except 90% of their emails contained words that mapped directly to one category. "Refund", "invoice", "overcharge" → billing. "Not working", "error", "bug" → technical support. A list of 50 keywords and a simple if-then classifier would have handled 90% of cases, run instantaneously, cost essentially nothing, and been debuggable by anyone on the team.
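A classifier like that fits in a dozen lines. A minimal sketch, assuming a first-match-wins rule and an explicit fallback to human review (the keyword lists are illustrative, not the client's real ones):

```python
# Minimal keyword classifier: first matching category wins,
# anything unmatched goes to a human review queue.
KEYWORDS = {
    "billing": ["refund", "invoice", "overcharge"],
    "technical support": ["not working", "error", "bug"],
    "cancellation": ["cancel", "close my account"],
    "product feedback": ["feature request", "suggestion"],
}

def classify(email_text: str) -> str:
    text = email_text.lower()
    for category, words in KEYWORDS.items():
        if any(word in text for word in words):
            return category
    return "human review"  # the ~10% of ambiguous cases

print(classify("Hi, I was overcharged on my last invoice"))  # billing
print(classify("Something strange happened yesterday"))      # human review
```

Anyone on the team can read this, test it, and extend it. That debuggability is itself a feature.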

They wanted the LLM version. The LLM version cost 40× more per classification, added 2–3 second latency, occasionally hallucinated a sixth category, and needed retuning whenever the product changed. For the 10% of ambiguous emails, a human review queue would have been fine.

Quick decision test

Should you use an LLM for this?

Probably not — consider rules/heuristics first:

  • A fixed set of known patterns maps to outcomes
  • Volume is low (<1,000/day) and occasional errors are tolerable
  • Decisions are binary (yes/no, approve/reject)
  • The categories are fully enumerable upfront

AI is probably the right tool:

  • Unstructured text with high variation
  • Context and nuance change the outcome
  • High volume with complex, open-ended outputs
  • Patterns aren't enumerable in advance

The rule here is simple: start with the least intelligent solution that could work. If a regex works, use a regex. If a lookup table works, use a lookup table. Only add AI when simpler approaches demonstrably fall short.

3. The required accuracy is effectively 100%

99% accuracy sounds almost perfect. At 10,000 transactions a day, that's 100 errors per day. At €50 average remediation cost per error, that's €5,000 a day in cleanup costs — €1.8 million a year. For a lot of processes, that maths doesn't work.
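The arithmetic scales linearly, which makes it easy to sanity-check against your own volumes. A quick sketch (plug in your own figures):

```python
# Annual cleanup cost at a given accuracy:
# errors/day = volume × (1 − accuracy), then × cost per error × days.
def annual_error_cost(daily_volume: int, accuracy: float,
                      cost_per_error: float, days: int = 365) -> float:
    errors_per_day = daily_volume * (1 - accuracy)
    return errors_per_day * cost_per_error * days

cost = annual_error_cost(10_000, 0.99, 50)
print(f"€{cost:,.0f}")  # €1,825,000 a year at "99% accurate"
```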

Some processes are not tolerant of errors at any realistic rate. Legal contract execution. Medical dosing calculations. Financial transactions where an incorrect amount triggers irreversible payment. Regulatory filings where a wrong checkbox means a compliance breach.

For these, you don't want AI making the call. You want AI to assist — to surface the relevant information, flag anomalies, pre-populate the obvious fields — while a human makes the actual decision and confirms the output. That's a very different system architecture, and a very different cost profile, than full automation.

We build human-in-the-loop systems for exactly this reason. The automation handles 80% of the cognitive work. The human handles the last mile. The error rate on the output is what the human delivers, not what the AI delivers.
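One common shape for this: every item still goes through a human, but the system pre-populates fields, flags anomalies, and prioritises the review queue by the model's confidence. A sketch under those assumptions (the threshold, field names, and `Draft` structure are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class Draft:
    fields: dict                        # pre-populated by the model
    anomalies: list = field(default_factory=list)  # flags for the reviewer
    confidence: float = 0.0             # model's self-reported confidence

def route(draft: Draft, review_queue: list) -> str:
    """Every draft goes to a human; shaky ones jump the queue."""
    priority = "urgent" if draft.confidence < 0.8 or draft.anomalies else "normal"
    review_queue.append((priority, draft))
    return priority

queue = []
route(Draft({"amount": "€120.00"}, [], 0.95), queue)                 # normal
route(Draft({"amount": "€9,999"}, ["amount outlier"], 0.6), queue)   # urgent
```

The point of the design is that nothing ships without a human confirmation, so the system's headline error rate is the reviewer's, not the model's.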

4. The data problem is the actual problem

I can't count how many times a company has said "we want to automate X" and after a week of investigation we've found that the actual blocker isn't the automation — it's that the data feeding the process is unreliable, incomplete, or inconsistent.

You want to automate your invoice processing. That makes sense. But your suppliers send invoices in 40 different formats, some of them scanned handwritten notes, some of them emailed spreadsheets, some of them PDFs where the fields don't match what your ERP expects. Those are addressable problems. But if the data at the source has fundamental quality issues — duplicate supplier records, inconsistent line-item coding, invoices that reference POs that don't exist — adding AI extraction on top of that doesn't solve anything. It automates the mess.

Automating bad data doesn't clean it up. It processes garbage faster, at scale, and deposits it deeper into your systems.

The diagnostic question here is: if a smart human worked this process full-time, what percentage of cases would they still need to escalate or query back upstream? If the answer is more than 15–20%, the data quality problem almost certainly needs addressing before the automation does. We often spend the first phase of a project just mapping the data pipeline and identifying where the mess originates.

5. The ROI genuinely doesn't justify it

This one is uncomfortable, but it needs saying. Some processes simply don't have the volume or error cost to make automation investment economically sensible.

We had a company approach us about automating a monthly report that took one person four hours to produce. That's 48 hours a year; even at a generous loaded rate of €50/hr, it's €2,400 in annual labour cost. A competent automation engineer costs more than that just in setup time. Even if we built it for free, maintaining it over three years would cost more than just... letting Sarah do the report.

The minimum viable ROI case for custom automation typically requires at least one of: high volume (hundreds of instances per week), high per-error cost, or significant hours per week being redirected. If the process doesn't meet that bar, the honest answer is that it doesn't warrant custom AI — and a good off-the-shelf tool or a simple script might be all you need.
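You can run this back-of-envelope test yourself before any scoping call. A sketch with placeholder figures (build cost and maintenance are assumptions you'd fill in):

```python
# Years until an automation pays for itself, or None if it never does.
def payback_years(hours_saved_per_year: float, hourly_rate: float,
                  build_cost: float, annual_maintenance: float):
    annual_saving = hours_saved_per_year * hourly_rate - annual_maintenance
    if annual_saving <= 0:
        return None  # maintenance alone eats the saving
    return build_cost / annual_saving

# A 4-hour monthly report: 48 hours/year at €50/hr,
# assuming €5,000 to build and €1,000/year to maintain.
print(payback_years(48, 50, 5_000, 1_000))     # ~3.6 years: not worth it
print(payback_years(2_000, 50, 20_000, 5_000))  # ~0.21 years: clearly worth it
```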

What you should actually do instead

In each of these five situations, there's a better first move:

  • Process isn't systematised: document it, standardise it, run one full manual cycle before touching automation.
  • Simple rule-based logic works: build the simple version first. Upgrade only when you can demonstrate the gap.
  • Near-100% accuracy required: build human-in-the-loop. Automate the preparatory work; keep humans on the decision.
  • Data quality is the problem: fix the data pipeline upstream. The automation isn't blocked from being built; it's blocked from working.
  • ROI doesn't stack up: find a simpler, cheaper tool, or accept that not everything needs automation.

None of these answers are satisfying when you came in wanting to automate. But the companies that get genuine long-term value from AI are the ones that were honest about this upfront — that chose the right problems to automate, not just the problems that sounded automatable.

We tell every client we talk to: if we can't make a clear ROI case for what we're proposing, we won't propose it. We'd rather send you away with a better question than take your money on a project that's going to fail.

That's the honest version of AI consulting. It isn't always comfortable, but it's the reason our systems are still running months later instead of quietly being turned off.

Not sure if your process is automatable?

That's literally what the first call is for. Describe what you're looking at and we'll give you an honest read — no deck, no proposal, no pressure.

Start a conversation