It is only a matter of time before a client asks you: “What is your AI policy?”
Some clients already are asking. Particularly in regulated industries (finance, healthcare, legal), procurement teams are adding AI usage questions to their vendor assessments. If you do not have an answer, you look unprepared. If you say “we do not use AI,” you look either behind the times or dishonest.
The agencies that handle this well have a clear, honest policy. Here is how to build one.
What your policy should cover
A practical agency AI policy does not need to be a 20-page document. It needs to cover five areas:
1. Where you use AI. List the categories of work where AI is part of your process. Research, first-draft content, data analysis, reporting, internal admin. Be honest and specific. Clients respect transparency.
2. Where you do not use AI. Equally important. Strategic recommendations, creative direction, client communication, and final quality checks should all be explicitly human-led. This reassures clients that your thinking is not outsourced.
3. Quality control. Describe your review process. Every AI output should be reviewed, refined, and approved by a human before it reaches the client. Explain how you ensure accuracy and brand consistency.
4. Data handling. This is the one clients care about most, and it overlaps heavily with AI and data privacy for agencies. How do you handle client data in relation to AI tools? Do you use tools that train on client inputs? Do you anonymise data before processing (see the sketch after this list)? Are client files uploaded to third-party AI services?
Most enterprise AI tools (Claude, ChatGPT Team/Enterprise, Copilot for Business) do not train on customer data by default. State which tools you use and what their data policies say.
5. Intellectual property. Who owns the output when AI is involved? In most jurisdictions, AI-generated content is not independently copyrightable, but work that is substantially human-directed and edited generally is. Your policy should clarify that all deliverables are the client’s property, produced under your creative direction using AI-assisted workflows.
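To make the anonymisation point concrete, here is a minimal sketch of redacting obvious identifiers before any client text leaves your systems for a third-party AI service. The `anonymise` function, the `REDACTIONS` table, and the regex patterns are illustrative assumptions, not a production approach; a real workflow would use a dedicated PII-detection tool, since regexes miss names, addresses, and account numbers.

```python
import re

# Minimal sketch: strip obvious identifiers from client text before it is
# sent to any third-party AI service. Patterns and labels are illustrative
# only; production use would call a dedicated PII-detection tool.
REDACTIONS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def anonymise(text: str) -> str:
    """Replace matched identifiers with labelled placeholders."""
    for label, pattern in REDACTIONS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

brief = "Contact Jane at jane@client.com or +44 7700 900123 about the Q3 review."
print(anonymise(brief))
# -> Contact Jane at [EMAIL] or [PHONE] about the Q3 review.
# Note: the name "Jane" survives; catching names needs entity recognition,
# which is exactly why regexes alone are not a complete answer.
```

Even a simple pass like this changes the answer you can give a procurement team: the raw brief never leaves your systems, only the redacted version does.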
How to present it to clients
Do not wait for clients to ask. Include a brief AI statement in your proposals and client onboarding materials. Something like:
“We use AI tools as part of our workflow to enhance research, analysis, and production efficiency. All strategic recommendations and creative direction are human-led. Client data is processed using enterprise-grade tools that do not train on your information. Every deliverable is reviewed and approved by our senior team before delivery.”
This positions you as modern, transparent, and professional. Hiding AI usage and being discovered later damages trust far more than being upfront about it.
The EU AI Act
Most of the EU AI Act’s obligations, including its transparency requirements, apply from August 2026. If you work with EU-based clients or operate in the EU, you need to be aware of them. The Act requires disclosure of AI usage in certain contexts, particularly content generation and automated decision-making.
For most agency work, the requirements are straightforward: be transparent about AI usage and maintain human oversight. But the details matter, and the penalties for non-compliance are significant: up to €15 million or 3% of global annual turnover for most breaches.
If you have EU clients, review the Act’s requirements for your specific services. The Investment Association and BIMA have published agency-specific guidance that is worth reading.
Why this matters now
The agencies that establish clear AI governance now will win in two ways. First, they will be ready when clients ask. Second, they will build a reputation for responsible AI usage that becomes a competitive advantage as regulation increases.
The agencies that wait will find themselves scrambling to create a policy under pressure, probably after a client raises concerns about something they have already done. For the broader change management challenge, see our guide on AI adoption beyond training.
This is part of Delivery Notes, a series on implementing AI inside your agency. Subscribe to the newsletter to get new articles weekly.