Margin Watch 7 February 2026 · 7 min read

The AI skills gap in agencies: where teams struggle most

Most agency teams can use AI tools. Far fewer can use them well. Here are the five skill gaps holding agencies back and how to close them before your competitors do.

Your team has AI tools. They have had training. They can write a prompt and get an output. But there is a significant difference between using AI tools and using them well, and that difference is where agencies are losing time, quality, and competitive advantage.

The AI skills gap in agencies is not about access to tools. Nearly every agency has ChatGPT, Copilot, or some combination of AI subscriptions. The gap is in the ability to use those tools effectively, consistently, and strategically. And it is widening.

The difference between using AI and using it well

A copywriter who asks ChatGPT to “write a blog post about social media trends” is using AI. A copywriter who provides a detailed brief with audience context, brand voice guidelines, specific angle, structural requirements, and examples of good output is using AI well. The first gets generic content that needs a complete rewrite. The second gets a usable first draft that needs 20 minutes of editing.

The difference in output quality is enormous. The difference in time saved is the gap between “AI does not really work for us” and “AI has transformed our production.”

This pattern repeats across every role in the agency. The tools are the same. The skills are not. And the agencies that close this gap first will outperform those that do not.

The five common skill gaps

After working with dozens of agencies on AI adoption, we see the same five gaps appearing consistently:

1. Prompt engineering

This is the most visible gap and the one agencies tend to focus on, but it goes deeper than most realise. Basic prompt writing (telling AI what you want) is relatively easy to learn. Advanced prompt engineering (structuring inputs for consistent, high-quality outputs across different contexts) is a genuine skill.

Where agencies struggle:

  • Writing prompts that produce consistent output quality, not just occasional good results
  • Structuring complex prompts with context, constraints, and examples
  • Building reusable prompt templates that work for different team members
  • Iterating on prompts systematically rather than starting from scratch each time

What good looks like: A shared prompt library with documented, tested templates for common tasks. Each template includes context fields, quality criteria, and example outputs. New team members can produce good results on day one because the prompt does the heavy lifting. For a practical starting point, our prompt engineering guide covers the fundamentals.
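A template like this can live in plain code as well as in a shared document. The sketch below shows one minimal way to do it in Python, using the standard library's string templating; the field names, quality criteria, and example values are all illustrative, not a prescribed format.

```python
# A minimal sketch of a shared, reusable prompt template.
# Field names and quality criteria here are illustrative only.
from string import Template

BLOG_POST_TEMPLATE = Template("""\
You are writing for $brand. Voice: $voice.
Audience: $audience.
Angle: $angle.
Structure: $structure.
Quality criteria: no generic openers; every claim backed by a source;
match the example below for tone.
Example of good output:
$example
""")

def build_prompt(**fields):
    """Fill the template; substitute() raises KeyError if a field is missing,
    which catches incomplete briefs before they reach the model."""
    return BLOG_POST_TEMPLATE.substitute(**fields)

prompt = build_prompt(
    brand="Acme Agency",
    voice="plain, confident, no jargon",
    audience="marketing directors at mid-size retailers",
    angle="why social listening beats trend-chasing",
    structure="hook, three sections with subheads, practical close",
    example="[paste an approved post here]",
)
```

Because missing fields fail loudly, a new team member cannot accidentally send an under-specified prompt: the template enforces the context the model needs.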

2. Workflow design

Individual prompts are useful. AI-powered workflows are transformative. The gap between the two is workflow design: the ability to think about how AI fits into a multi-step process, not just a single task.

Where agencies struggle:

  • Identifying which steps in a process AI should handle vs. humans
  • Connecting AI outputs to the next step in the workflow (rather than copy-pasting between tools)
  • Building workflows that other team members can follow without deep AI knowledge
  • Automating repetitive sequences rather than doing them manually each time

What good looks like: A client onboarding process where AI automatically analyses the brief, generates a competitive landscape, drafts a project plan, and prepares meeting talking points. The human reviews, refines, and presents. The whole thing takes 2 hours instead of a full day. Our piece on implementing AI in your agency covers workflow design in more depth.
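The onboarding workflow above can be sketched as a simple chained pipeline, so each AI output feeds the next step instead of being copy-pasted between tools. In this sketch `call_model` is a hypothetical placeholder for whatever AI provider the agency uses; the step names are taken from the example above.

```python
# A sketch of the client onboarding workflow, with the AI steps chained
# so each output feeds forward. call_model is a hypothetical stand-in
# for a real AI provider call.
def call_model(task: str, payload: str) -> str:
    # Placeholder: in practice this would call your AI provider's API.
    return f"[{task} draft based on: {payload[:40]}]"

def onboard_client(brief: str) -> dict:
    """Run the four AI steps in sequence and return one review package."""
    analysis = call_model("analyse brief", brief)
    landscape = call_model("competitive landscape", analysis)
    plan = call_model("project plan", landscape)
    talking_points = call_model("meeting talking points", plan)
    # Everything lands in one package for human review and refinement.
    return {
        "analysis": analysis,
        "landscape": landscape,
        "plan": plan,
        "talking_points": talking_points,
    }

result = onboard_client("New DTC coffee brand, launching Q3, budget £40k")
```

The point of the structure is that a team member with no deep AI knowledge runs one function, reviews one package, and never touches the plumbing in between.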

3. Quality review

This is the gap that creates the most risk. AI outputs are confident and fluent, which makes them easy to accept uncritically. The skill of reviewing AI output effectively (catching errors, identifying gaps, spotting hallucinations, and assessing appropriateness) is underdeveloped in most agency teams.

Where agencies struggle:

  • Distinguishing between AI output that sounds right and AI output that is right
  • Checking facts, figures, and claims that AI presents with complete confidence
  • Evaluating whether AI-generated creative work meets brand standards
  • Knowing when to reject AI output entirely and do the work manually

What good looks like: A systematic review process. Every AI output gets checked against source material. Claims are verified. Creative work is evaluated against the brief and brand guidelines, not just grammar and fluency. The team knows that AI review is a skill, not a formality. This should be built into your AI governance framework.
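One way to make the review systematic rather than ad hoc is to encode the checklist so nothing ships until every item is signed off. The sketch below assumes a simple checklist-and-sign-off model; the checklist items are illustrative and the real judgement stays with the human reviewer.

```python
# A minimal sketch of a review gate: output is only approved when every
# checklist item has been explicitly signed off. Checklist items are
# illustrative; the actual review is human.
REVIEW_CHECKLIST = [
    "claims verified against source material",
    "figures and dates spot-checked",
    "brand voice matches guidelines",
    "brief requirements all addressed",
]

def review(output: str, checks_passed: set) -> tuple:
    """Return (approved, outstanding checks). Approval requires every item."""
    outstanding = [c for c in REVIEW_CHECKLIST if c not in checks_passed]
    return (len(outstanding) == 0, outstanding)

approved, todo = review(
    "draft copy...",
    {"claims verified against source material"},
)
# approved stays False until every checklist item is signed off
```

Encoding the gate this way makes the review auditable: you can see exactly which checks a piece of output passed before it went to the client.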

4. Data literacy

AI tools are increasingly powerful for data analysis, research, and insight generation. But using them well requires a basic understanding of data: what questions to ask, how to interpret results, and when the data is misleading.

Where agencies struggle:

  • Asking AI meaningful analytical questions (not just “analyse this data”)
  • Understanding the limitations of AI-generated analysis (correlation vs. causation, sample size, bias)
  • Combining AI analysis with human judgement to produce genuine insights
  • Presenting data-driven findings to clients in a way that drives decisions

What good looks like: A strategist who uses AI to process large datasets, identify patterns, and generate initial hypotheses, then applies their expertise to determine which patterns are meaningful, what the implications are, and what the client should do about it. The AI does the processing. The human does the thinking.

5. Strategic application

The highest-level gap. Most agency staff can use AI for tactical tasks (writing, research, summarisation). Far fewer can think strategically about where AI creates the most value across the agency or for a client.

Where agencies struggle:

  • Identifying which client problems AI can solve (not just which tasks it can speed up)
  • Proposing AI-enhanced services to clients as part of strategic recommendations
  • Thinking about AI at the business model level, not just the task level
  • Understanding how AI changes competitive dynamics in the client’s industry

What good looks like: A strategist who proactively identifies that a client’s content operation could be restructured around AI, proposes a new service model, and delivers it at higher quality and lower cost than the previous approach. This is where AI skills become revenue-generating, not just cost-saving.

How to assess your team’s current level

You cannot close a gap you have not measured. Here is a simple assessment framework:

Level 1: Aware. Can describe what AI tools do. Has not used them meaningfully for work.

Level 2: Basic user. Uses AI for simple, one-off tasks. Results are inconsistent. Does not have a systematic approach.

Level 3: Competent user. Uses AI regularly for defined tasks. Has reliable prompts. Reviews output critically. Saves measurable time.

Level 4: Workflow builder. Designs multi-step AI workflows. Builds tools and templates others can use. Thinks in systems, not individual prompts.

Level 5: Strategic operator. Applies AI at the business level. Identifies new service opportunities. Drives competitive advantage through AI capability.

Assess each team member across the five skill areas. You will likely find most people at Level 2-3 for prompt engineering and Level 1-2 for everything else. That is normal. It tells you where to focus.
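If you record the assessment as a number from 1 to 5 per person per skill area, finding the training focus is a small calculation. The sketch below assumes that recording format; the names, scores, and Level 3 target are illustrative.

```python
# A sketch of the assessment framework: levels recorded as integers 1-5
# per person per skill area. Names and scores are illustrative.
SKILLS = ["prompting", "workflow", "review", "data", "strategy"]

def focus_areas(team: dict, target: int = 3) -> dict:
    """Average level per skill; anything below the target is a training focus."""
    averages = {
        skill: sum(levels[skill] for levels in team.values()) / len(team)
        for skill in SKILLS
    }
    return {skill: avg for skill, avg in averages.items() if avg < target}

team = {
    "Ana": {"prompting": 3, "workflow": 2, "review": 2, "data": 1, "strategy": 1},
    "Ben": {"prompting": 2, "workflow": 1, "review": 2, "data": 2, "strategy": 1},
}
gaps = focus_areas(team)  # every skill averaging below Level 3
```

A spreadsheet does the same job; the value is in scoring each skill area separately rather than giving each person a single overall "AI level".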

Building a skills development plan

Once you have your assessment, build a plan:

For the whole team (Levels 1-3): Focus on prompt engineering and quality review. These are the foundational skills that everyone needs. Run structured workshops with real work, not theoretical exercises. Build a prompt library. Establish review standards.

For high-potential individuals (Levels 3-4): Invest in workflow design and data literacy. These people become your AI champions, the ones who build systems that lift the whole team’s capability.

For senior leadership (Levels 4-5): Develop strategic application skills. This is about understanding AI at the business model level: how it changes agency economics, pricing, service design, and competitive positioning.

Set a timeline. Within 90 days, your whole team should be at Level 3 minimum. Within six months, you should have 3-5 people at Level 4. Within a year, your leadership team should be operating at Level 5. This maps to the change management approach that turns training into daily habits.

The risk of not closing the gap

This is not optional. The agencies that close the AI skills gap will:

  • Deliver faster without sacrificing quality
  • Price more competitively because their cost to deliver is lower
  • Win more pitches because they can demonstrate AI-enhanced capability
  • Retain better talent because skilled people want to work somewhere that takes AI seriously

The agencies that do not close the gap will find themselves competing against agencies that produce better work in less time at lower cost. That is not a comfortable position.

Within two years, AI proficiency will be a baseline expectation, like knowing how to use email or a project management tool. The agencies that develop these skills now will be the ones setting the standard. The rest will be catching up.

Start with the assessment. Build the plan. Close the gap. Your competitors already are.


This is part of Margin Watch, a series on how AI is reshaping the business of running an agency. Subscribe to the newsletter to get new articles weekly.

Written by Connor

Founder of Augmented Agency. Built and sold a £2.2M agency. Now helps agency owners implement AI.
