Every agency using AI is handling data. The question is whether they are handling it properly.
Most are not. We see agencies pasting client analytics into ChatGPT, uploading customer lists to free AI tools, and running client content through platforms with no clear data processing terms. It works until a client asks where their data goes. Or worse, until a regulator does.
This is not a legal article. It is a practical guide to getting your data handling right before it becomes a problem.
What you can and cannot put into AI tools
The first thing to understand: when you use a consumer AI tool (the free version of ChatGPT, for example), the data you input may be used to train future models. That means anything you paste in could, in theory, influence outputs for other users.
Generally safe to input:
- Publicly available information (website content, published reports, news articles)
- Your own agency’s internal processes and templates
- Generic industry data and benchmarks
- Anonymised or aggregated data with no personally identifiable information
Treat with caution:
- Client brand guidelines and strategy documents (check your contract)
- Unpublished creative work or campaign concepts
- Competitive analysis containing proprietary data
- Financial projections or business plans
Never input without explicit consent:
- Customer personal data (names, emails, phone numbers, addresses)
- CRM exports or customer lists
- Analytics data with user-level detail
- Health, financial, or other sensitive personal data
- Anything covered by an NDA
The line is straightforward: if the data belongs to or identifies a real person, do not put it into a consumer AI tool.
Enterprise vs consumer AI accounts
This distinction matters more than most agencies realise.
Consumer accounts (free or personal paid tiers of ChatGPT, Claude, Gemini): Your data may be used for model training. You typically have limited control over data retention. Privacy protections are minimal.
Enterprise and business accounts (ChatGPT Team/Enterprise, Claude for Business, Google Workspace with Gemini): Your data is not used for training. You get data processing agreements. Retention policies are configurable. Audit logs are available.
The cost difference is small. Upgrading your team to business-tier AI accounts costs £20-30 per person per month. For an agency handling client data, this is not optional. It is the baseline.
Action: Move your entire team to business-tier accounts. Today. Do not wait for a client to ask. The reputational risk of a data incident far outweighs the monthly cost.
GDPR considerations
If you operate in the UK or EU (or handle data of UK/EU residents), GDPR applies to your use of AI. Here is what matters in practice.
Lawful basis for processing
Using AI to process personal data requires a lawful basis, just like any other data processing. For most agency use cases, this falls under “legitimate interests” or “contractual necessity.” But you need to document it.
Data processing agreements
If you are using AI tools to process client data, those tools are your data processors. You need a Data Processing Agreement (DPA) with each AI provider you use. Most enterprise-tier AI tools offer standard DPAs. Download them, review them, and keep them on file.
Data minimisation
Only input the data you actually need. If you are analysing campaign performance, you do not need individual user names. Aggregate the data first. Strip out personal identifiers before feeding anything into an AI tool.
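As a concrete sketch of this step, the snippet below collapses user-level rows into per-campaign totals before anything is pasted into an AI tool. The column names ("campaign", "email", "clicks", "conversions") are illustrative, not a prescribed schema; adapt them to whatever your analytics export actually contains.

```python
from collections import defaultdict

def aggregate_campaign_rows(rows):
    """Collapse user-level rows into per-campaign totals,
    dropping personal identifiers such as emails along the way."""
    totals = defaultdict(lambda: {"clicks": 0, "conversions": 0})
    for row in rows:
        t = totals[row["campaign"]]
        t["clicks"] += int(row["clicks"])
        t["conversions"] += int(row["conversions"])
    # Only aggregated, non-identifying figures leave this function.
    return {campaign: dict(v) for campaign, v in totals.items()}

# Hypothetical user-level export: this is what must NOT reach the AI tool.
rows = [
    {"campaign": "spring", "email": "a@example.com", "clicks": "10", "conversions": "1"},
    {"campaign": "spring", "email": "b@example.com", "clicks": "5", "conversions": "0"},
    {"campaign": "summer", "email": "c@example.com", "clicks": "7", "conversions": "2"},
]
print(aggregate_campaign_rows(rows))
# {'spring': {'clicks': 15, 'conversions': 1}, 'summer': {'clicks': 7, 'conversions': 2}}
```

The aggregated output is safe to include in a prompt; the raw rows never are.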
Data transfers
Most AI tools process data in the US. Under GDPR, transferring personal data outside the UK/EU requires appropriate safeguards (typically Standard Contractual Clauses). Check that your AI providers have these in place.
Records of processing
Maintain a record of what data you process through AI tools, why, and where. This does not need to be complex. A spreadsheet listing each AI tool, what data it processes, the lawful basis, and the DPA status covers it.
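If you would rather generate that register than maintain it by hand, a few lines of Python will do. The tool names, data descriptions, and column headings below are examples of what a row might look like, not a required format.

```python
import csv
import io

# Illustrative register entries; replace with your own tools and lawful bases.
REGISTER = [
    {"tool": "ChatGPT Team", "data": "Aggregated campaign metrics",
     "lawful_basis": "Legitimate interests", "dpa_on_file": "Yes"},
    {"tool": "Claude for Business", "data": "Draft client copy (no personal data)",
     "lawful_basis": "Contractual necessity", "dpa_on_file": "Yes"},
]

# Write the register as CSV so it can live alongside your other records.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["tool", "data", "lawful_basis", "dpa_on_file"])
writer.writeheader()
writer.writerows(REGISTER)
print(buf.getvalue())
```

Swap `io.StringIO` for a real file handle to save it; a shared spreadsheet works just as well, as long as someone owns keeping it current.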
Where data goes: the flow you need to understand
When you input data into an AI tool, it typically flows through several stages:
- Transmission. Your data travels from your device to the AI provider’s servers. This should always be encrypted (HTTPS).
- Processing. The AI model processes your input and generates a response. This happens on the provider’s infrastructure.
- Storage. Your input and the response are stored for some period. On consumer accounts, this can be indefinite. On business accounts, you can control retention.
- Training (consumer only). On consumer accounts, your data may be added to the training dataset for future model versions.
The critical question for each tool: Does my data stop at step 3, or does it reach step 4? If it reaches step 4, that tool should not touch client data.

Building an AI data handling policy
Every agency using AI needs a written policy. It does not need to be 50 pages. One to two pages covering the essentials will do.
What to include
Approved tools. List the AI tools your team is allowed to use, with the tier (consumer or business). Ban the use of unapproved tools for client work.
Data classification. Define three categories:
- Public: Can be used freely in any AI tool.
- Confidential: Can only be used in business-tier AI tools with DPAs in place.
- Restricted: Cannot be used in any external AI tool without explicit client approval.
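The three categories above lend themselves to a simple pre-flight check. The sketch below is one possible way to encode the rule, under the assumption that you maintain a list of approved tools and the classes each may receive; the tool names and structure are hypothetical.

```python
# Hypothetical approved-tool list: each tool maps to the data classes it may receive.
APPROVED_TOOLS = {
    "chatgpt_team": {"public", "confidential"},  # business tier, DPA on file
    "chatgpt_free": {"public"},                  # consumer tier: public data only
}

def may_input(tool, data_class, client_approved=False):
    """Return True if data of this class may go into this tool.
    Restricted data always requires explicit client approval."""
    if data_class == "restricted":
        return client_approved
    return data_class in APPROVED_TOOLS.get(tool, set())

assert may_input("chatgpt_team", "confidential")
assert not may_input("chatgpt_free", "confidential")
assert not may_input("chatgpt_team", "restricted")
assert may_input("chatgpt_team", "restricted", client_approved=True)
```

Even if nobody ever runs this as code, writing the policy in this form forces you to decide, tool by tool, exactly which classes are allowed.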
Client communication. A standard paragraph for client contracts or SOWs that explains your AI usage and data handling practices. Be transparent. Clients respect honesty about how you work.
Incident response. What happens if someone inputs restricted data into an unapproved tool? Who do they tell? What steps are taken? Having a process prevents panic.
Training. Ensure every team member understands the policy. Run a 30-minute session when someone joins and a refresher every six months. This should be part of your broader AI training programme.
A template paragraph for client contracts
Here is a starting point:
“We use AI tools in our delivery processes to improve efficiency and output quality. All AI tools used for client work operate under business-tier agreements with data processing terms that prevent your data from being used for model training. We do not input personally identifiable data into AI tools without your explicit consent. Our AI data handling policy is available on request.”
Adapt it to your specifics, but the principle is: be upfront about the fact that you use AI, and reassure clients that their data is handled properly.
The competitive advantage of getting this right
Most agencies are ignoring data privacy in their AI adoption. That creates an opportunity.
When you can show a prospective client a written AI data handling policy, business-tier tool agreements, and DPAs on file, you stand out. Enterprise clients and regulated industries (finance, healthcare, legal) will not work with agencies that cannot demonstrate data governance.
Getting your data handling right is not just risk management. It is a sales asset. It also feeds directly into your AI governance framework, which more enterprise clients are asking to see before signing.
This is part of Delivery Notes, a series on implementing AI inside your agency. Subscribe to the newsletter to get new articles weekly.