So you’re using AI in your business, or you’re thinking about it. That’s great. But the question that keeps many small business owners awake at night is: how do I make sure I’m doing this compliantly?
The good news is that GDPR doesn’t stop you from using AI. You just need to be thoughtful about how you use it, and transparent about what you’re doing. Let me walk you through the practical steps you need to take.
Update your record of processing activities
Every business needs to maintain a record of processing activities (often called a ROPA). Article 30 of GDPR makes this mandatory for most organisations; there is a narrow exemption for businesses with fewer than 250 employees, but it falls away if your processing is regular rather than occasional, which covers most businesses handling customer data. A ROPA is essentially a map of what data you hold, why you hold it, and what you do with it.
When you start using AI, your ROPA needs to reflect this. You need to document:
- What AI tools you’re using
- What data you’re putting into them
- Why you’re using AI for this purpose
- How the AI processes that data
This isn’t about creating mountains of paperwork. It’s about being clear and honest about what’s happening in your business. When you can see it written down, it’s much easier to spot potential issues before they become problems.
Think of your ROPA as a living document. Every time you adopt a new AI tool or change how you use an existing one, update your record. Make this part of your process, not something you do once and forget about.
Sort out your privacy notice
Your privacy notice tells people what you do with their data. If you’re using AI, especially if it involves customer data, this needs to be clear in your privacy notice.
You’re already required to say whether you use automated decision-making that has legal or similarly significant effects on people (Article 22 of GDPR covers this). But if you’re using generative AI, large language models, or tools like Microsoft Copilot, you should be mentioning these too.
Now, here’s my recommendation: rather than burying AI information in your main privacy notice, create a standalone AI policy or statement for your website. Keep it alongside your privacy notice, but give it its own space.
Why do I suggest this? Because AI use changes quickly. You might try a tool, realise it’s not right, and switch to something else. You might add new features or capabilities. Having a separate AI statement means you can update it easily without having to rewrite your entire privacy notice every time something changes.
Your customers, prospects, and anyone who cares about how you handle data can then review your AI use clearly. It shows you’re being open about what you’re doing, and that builds trust.
Get your contracts sorted
Contracts are absolutely crucial when it comes to AI. You need clear agreements with three groups of people:
- Your team (employees and freelancers)
- Your suppliers
- Your customers
Let me give you a practical example. Say you have a freelance writer who creates blog content for you. Are you happy for them to use AI to write those blogs? Maybe you are, maybe you’re not. But you need to be clear about it in their contract.
If they’re creating content using AI and you’re presenting it to your clients as unique, human-written content, that’s a problem. You need to know what’s what, and contracts are how you make that clear.
Here’s another crucial point: copyright. Who owns content created by AI? You need to check the terms and conditions of any AI tool you use, because copyright might not automatically sit with you.
If you’re a designer, writer, or anyone creating content as part of your service, this matters enormously. You can’t sign over copyright to your client if you don’t own it in the first place. Your contracts need to reflect the reality of how that content was created.
Put proper controls in place
Contracts are one form of control, but you need others too. Think about:
- Staff policies – Do your employees know what they can and can’t do with AI? Have you trained them on your expectations?
- Shadow AI – This is when team members use their own personal AI tools and bring that content into your business. How will you manage this? How will you even know it’s happening?
- Permission and access levels – If you’re using something like Copilot in your Microsoft environment, you need to look carefully at permissions. Copilot will surface anything a user’s account technically has access to, so over-broad permissions quickly become a problem.
Let me share a real example. I worked with a client whose receptionist used Copilot to create a report. Because the permissions weren’t set up properly, she ended up seeing profit and loss figures and salary information that should have been visible only at board level. The AI didn’t know she shouldn’t see that data; it simply responded to her request.
This isn’t about not trusting your team. It’s about making sure the right information stays with the right people. You might need IT support to set this up properly, but it’s worth doing.
Get your legal basis right
Under GDPR, you need a legal basis for processing data. When AI is involved, you need to think carefully about which basis applies. Here are the four most relevant ones:
- Consent – If you’re putting customer data through an AI tool, your customers need to know and agree. This can’t be hidden in small print. It needs to be clear, informed, and actively given.
- Contract – If using AI is necessary to deliver your service, this might be your legal basis. But you need to be genuine about this: it has to be truly necessary, not just convenient.
- Legitimate interest – This is trickier with AI. You need to run a proper legitimate interests assessment: the purpose test, the necessity test, and the balancing test. What’s in it for the customer? What are the risks? The legitimate interest rules are changing, so keep an eye on developments here.
- Legal obligation – Depending on your industry and where your customers are based, you might have specific legal requirements about AI use. Financial services, healthcare, and businesses working internationally need to be particularly aware of this.
Think about the fundamentals
GDPR is built on some core principles, and they all apply when you’re using AI:
Minimisation – Only collect and use the data you actually need. AI can analyse massive amounts of data, but that doesn’t mean you should feed everything into it.
Security – Who can access your AI tools? What security does the tool provider have in place? How do you keep the data you’re putting into AI secure?
Transparency – Are your staff clear on your AI policies? Do your customers understand how you use AI? If not, you’ve got work to do.
Individual rights – People have eight rights under GDPR, including the right to access their data and know how you’re using it. If AI is part of that picture, you need to be able to explain it.
- Data deletion – This is a big one. Most large language models don’t let you pull data back out once you’ve put it in. But GDPR says you should only keep data for as long as necessary and have proper deletion cycles. How are you going to manage this tension?
Making it work in practice
I know this sounds like a lot, but here’s the truth: GDPR isn’t about having perfect policies sitting in a folder somewhere. It’s about building good practices into how you actually work, day to day.
This is what’s called privacy by design. It means you think about data protection as you go, not as an afterthought. When you’re considering a new AI tool, you ask the GDPR questions at the same time. When you’re writing contracts, you include the AI clauses from the start.
The businesses that do this well aren’t the ones with the most elaborate policies. They’re the ones where everyone understands what’s expected, where good practice is just “how we do things here.”
Your customers don’t need to see eight lever arch files of policies. They need to understand what you do with their data, including AI use, in clear, straightforward language. Your team doesn’t need a SharePoint folder they never open. They need practical guidance they can actually follow.
Your next steps
Start with an audit. What AI are you using? What data goes into it? Who has access? What do your current contracts say?
Then identify the gaps. Where do you need to update documents? Where do you need better controls? What training does your team need?
Finally, create a simple plan to address those gaps. Don’t try to do everything at once. Pick the most important items and work through them systematically.
Remember, the goal isn’t perfection. The goal is to run your business in a way that’s transparent, fair, and compliant. When you get the foundations right, using AI becomes much less stressful and much more beneficial.