What should be automated, partially automated, or never automated?
One of the most valuable things an organisation can do right now is make explicit decisions about what to hand to AI, what to augment with AI, and what to protect from AI.
10 February 2025 · 3 min read
One of the most common mistakes organisations make with AI is treating it as a uniform capability that should be applied everywhere possible. The result is either analysis paralysis (“we need to evaluate every process”) or indiscriminate adoption (“let’s AI everything and see what sticks”).
A more useful approach is to make deliberate, explicit decisions about three categories: what should be fully automated, what should be augmented, and what should remain human-led.
Should be automated
Tasks in this category are repetitive, rule-based, and high-volume. The inputs are well-defined. The outputs are measurable. Speed and accuracy matter more than judgement.
Examples:
- Data entry and format conversion
- Routine report generation
- First-pass categorisation and tagging
- Compliance checking against known rules
- Scheduling and calendar coordination
- Invoice processing and reconciliation
The decision rule here is simple: when a task is primarily pattern-matching against well-defined rules, AI will do it faster, more consistently, and more cheaply than a human. Human time is better spent elsewhere.
Should be partially automated (human + AI)
This is the largest and most strategically important category. These are tasks where AI dramatically improves human performance but human judgement, context, or accountability remains essential.
Examples:
- Drafting documents, emails, and proposals (AI drafts, human refines and owns)
- Research synthesis and summarisation (AI aggregates, human interprets and decides)
- Customer support escalation triage (AI categorises and prioritises, human resolves)
- Code generation (AI writes, human reviews and is accountable)
- Meeting notes and action items (AI captures, human validates and decides)
The principle here is human-in-the-loop for judgement and accountability. AI handles the mechanical burden; humans provide the contextual wisdom and own the outcome.
Should not be automated
Some tasks should remain fundamentally human — not because AI cannot do them, but because the value of those tasks is inseparable from human involvement.
Examples:
- Strategic leadership decisions with significant ethical implications
- Complex negotiations requiring deep relationship trust
- Performance conversations and coaching
- Crisis communication requiring genuine empathy
- Creative work where originality and human perspective are the point
- Decisions where accountability cannot be delegated to a machine
This is not about capability. In many of these domains, AI can already produce plausible outputs. The issue is that the human doing the task is part of what creates value — the relationship, the accountability, the genuine care.
Making this practical
The most valuable thing we help organisations do is run a structured process to map their key processes against these three categories. The output is not a technology roadmap — it is a set of explicit decisions about where human time and energy should go.
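As a rough illustration of what that mapping exercise can produce, the three-way decision can be sketched as a simple rubric. The criteria names, thresholds, and example tasks below are hypothetical, not part of the framework itself; a real exercise would weigh far more context than four boolean flags:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    rule_based: bool       # inputs well-defined, outputs measurable
    high_volume: bool      # repetitive and frequent
    needs_judgement: bool  # context or accountability is essential
    relational: bool       # human involvement is part of the value

def categorise(task: Task) -> str:
    """Map a task to one of the three categories described above."""
    if task.relational:
        return "human-led"   # value is inseparable from the person doing it
    if task.needs_judgement:
        return "augment"     # AI handles the mechanics, human owns the outcome
    if task.rule_based and task.high_volume:
        return "automate"    # pattern-matching against known rules
    return "augment"         # default: keep a human in the loop

# Hypothetical examples drawn from the lists above
tasks = [
    Task("Invoice reconciliation", rule_based=True, high_volume=True,
         needs_judgement=False, relational=False),
    Task("Proposal drafting", rule_based=False, high_volume=False,
         needs_judgement=True, relational=False),
    Task("Performance coaching", rule_based=False, high_volume=False,
         needs_judgement=True, relational=True),
]
for t in tasks:
    print(f"{t.name}: {categorise(t)}")
```

The point of the sketch is the ordering: the relational test comes first, because capability is not the deciding factor for human-led work; automation is only the answer once both judgement and relationship have been ruled out.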
This has a secondary benefit: it forces conversations about what humans in the organisation are actually for. That is an uncomfortable conversation. It is also an essential one.
The organisations that get this right do not just save money on automation. They refocus their human talent on the things that only humans can do well — and they tend to be much better at those things as a result.
Want to talk through this?
Book a free discovery call with the Berst Consulting team.