We spend most of our time helping agencies adopt AI. So it might seem strange to write an article about where AI should not be used. But this is the conversation that matters most right now.
The agencies getting AI wrong are not the ones ignoring it. They are the ones applying it everywhere without thinking about where it actually helps. They automate things that should not be automated. They save time on tasks where time is the point.
1. Client relationship management
This is the big one. The moment a client feels like they are being managed by a system rather than a person, the relationship starts to erode.
AI can draft emails. It can generate meeting agendas. It can summarise calls and suggest follow-up actions. All useful, behind the scenes. But the client-facing communication itself needs to be genuinely human.
Why it matters: agency relationships are built on trust, and trust is built on the feeling that someone understands you and is paying attention. AI-generated emails are often technically correct but emotionally flat. They lack the warmth, the specific references to last week’s conversation, the instinct to pick up the phone instead of sending another email.
We have seen agencies automate their client check-ins with AI-drafted updates. Open rates drop. Response rates drop. Within six months, those clients start taking calls from other agencies.
The rule: use AI to prepare for client interactions (research, summaries, talking points). Never use it to replace them.
2. Strategic thinking
AI is an optimisation engine. It finds patterns in existing data and suggests improvements based on what has worked before. That is useful for many things. It is not useful for strategy.
Strategy, real strategy, involves making choices about where to play and how to win. It requires understanding context, anticipating competitor moves, reading market shifts, and making bets on uncertain outcomes. AI cannot do any of this well because strategy is about deciding what not to do, and AI has no framework for sacrifice.
Ask AI to write a marketing strategy and you will get a comprehensive, well-structured document that covers every channel, every tactic, and every audience segment. It will be thorough. It will also be useless, because strategy that tries to do everything does nothing.
The value your agency provides is not the document. It is the thinking behind it. The decision to focus on three channels instead of eight. The instinct that a particular positioning will resonate. The experience that tells you this tactic worked three years ago but the market has moved on.
The rule: use AI for strategic research and analysis. Never use it to make strategic decisions.
3. Creative direction
AI follows patterns. Creativity breaks them.
This distinction matters enormously for agencies that sell creative work. AI can generate a thousand visual options, write a hundred headlines, and produce dozens of campaign concepts. But it generates them based on what has existed before. It interpolates between known points. It does not leap to something genuinely new.
The creative director who looks at a brief and has an instinct, a feeling that this campaign needs to go in an unexpected direction, is doing something AI cannot replicate. That instinct comes from years of experience, cultural awareness, and the kind of creative courage that involves risking being wrong.
Where this goes wrong in practice: agencies that use AI to generate campaign concepts and then pick the “best” one are selecting from a pool of derivative ideas. The output feels familiar because it is. It is a composite of everything that has come before, smoothed into statistical plausibility.
The most memorable campaigns break rules. AI does not break rules. It learns them.
The rule: use AI for creative production (variations, adaptations, format conversions). Keep creative direction firmly human.
4. Team culture and development
Some agencies have started using AI to write performance reviews, generate feedback, or even craft team communications. This is a mistake that costs more than it saves.
Team culture is built on authentic human connection. When a manager takes the time to write thoughtful feedback, notice someone’s growth, or craft a message that acknowledges a difficult week, that effort is the point. The time spent is not waste. It is investment.
An AI-generated performance review might cover all the right points. But the team member reading it can feel the difference. It lacks the specificity that says “I have been paying attention to you specifically.” It lacks the vulnerability that builds trust between managers and their teams.
The broader risk: if your team discovers that management communications are AI-generated, you lose credibility in a way that is very hard to recover. Trust, once broken by the feeling that leadership could not be bothered to write their own words, does not rebuild easily.
The rule: use AI to organise your thoughts (bullet points, structure, reminders of key events). Write the actual communications yourself.
5. Crisis communication
When something goes wrong, whether it is a client crisis, a public-facing mistake, or an internal issue, the stakes are too high for AI involvement in the response.
Crisis communication requires empathy, judgement, and the ability to read a situation in real time. It requires knowing when to apologise, when to explain, when to stay quiet, and when to escalate. These are human skills that depend on emotional intelligence and contextual awareness.
AI-generated crisis responses tend to be cautious, generic, and tone-deaf. They default to corporate language when the situation requires authenticity. They optimise for legal safety when the moment calls for genuine accountability.
A real example: a client's social media crisis was escalating, and the junior account manager used AI to draft a response. The AI produced a perfectly structured, legally cautious statement. The creative director rewrote it in two minutes with a direct, honest, human response that acknowledged the mistake. The crisis de-escalated within hours. The AI draft would have made it worse, not because it was wrong, but because it felt corporate in a moment that required honesty.
The rule: in high-stakes situations, keep AI out of the communication entirely. Speed matters less than getting it right.
The bigger principle
The thread connecting all five of these areas is the same: AI fails where human connection, judgement, and authenticity are the product, not just part of the process.
Agencies sell expertise, relationships, and creative thinking. AI enhances the delivery of those things. It does not replace them. The moment you start using AI as a substitute for the parts of your work that clients value most, you are undermining the very thing they pay you for.
The best agencies we work with ask one question before applying AI to any task: “Is the human effort here part of the value, or part of the cost?” If it is part of the cost (research, formatting, data processing, production), automate it. If it is part of the value (relationships, strategy, creative thinking, culture), protect it.
That distinction is simple. Getting it right is what makes an agency worth hiring.
This is part of Delivery Notes, a series on implementing AI inside your agency. Subscribe to the newsletter to get new articles weekly.