Data Privacy and Security

Governance gives you the framework. Now you need the specifics. What do GDPR and the EU AI Act actually require? And how do you meet those requirements in ServiceNow?

This isn't theoretical compliance box-ticking. Get this wrong and you're looking at fines that can reach four percent of annual global turnover. More importantly, you're risking the trust of the people whose data you're processing.

GDPR in the AI Context

GDPR wasn't written with AI in mind, but its principles apply directly.

Data minimisation means only using what you actually need. If your incident summarisation skill works perfectly well without customer phone numbers, don't include them in the prompt. Every piece of personal data you feed to AI is personal data you need to justify processing. Less is safer.
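
As a concrete illustration, here's a minimal Python sketch of the allowlist approach: build the AI payload from an explicit list of needed fields rather than passing the whole record. The field names and record shape are invented for the example, not a ServiceNow API.

```python
# Minimal sketch: only allowlisted fields ever reach the prompt.
# The field names and record shape below are hypothetical examples.
ALLOWED_FIELDS = {"short_description", "description", "category", "priority"}

def build_prompt_payload(incident: dict) -> dict:
    """Keep only the fields the skill actually needs."""
    return {k: v for k, v in incident.items() if k in ALLOWED_FIELDS}

incident = {
    "short_description": "VPN drops every 20 minutes",
    "description": "User reports repeated disconnects since Monday.",
    "category": "network",
    "priority": "3",
    "caller_phone": "+44 7700 900123",  # personal data the summary doesn't need
}

payload = build_prompt_payload(incident)
assert "caller_phone" not in payload  # the phone number never enters the prompt
```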

Purpose limitation means using data only for what you said you'd use it for. If you collected contact details for service delivery, using them to train AI recommendations might exceed that purpose. Check your privacy notices. Check what people consented to.

Individual rights don't disappear because AI is involved. People can still request access to their data. They can still request deletion. If your AI has processed their information, you need to know where that happened and what was done with it. Your logging strategy matters here.

Transparency means people should know when AI is involved in decisions about them. If an AI summarised their HR case, they have a reasonable expectation to know that. Hidden AI feels deceptive even when it's not.

The EU AI Act

The EU AI Act is newer and specifically targets AI systems. It classifies applications by risk level and imposes requirements accordingly.

High-risk systems face the strictest requirements. Think AI that affects employment decisions, credit scoring, or access to essential services. HR case management might fall here depending on how it's used. High-risk classification means mandatory conformity assessments, detailed documentation, human oversight requirements, and ongoing monitoring.

Limited-risk systems need transparency. Users must know they're interacting with AI. Chatbots, for instance, need to identify themselves as artificial.

Minimal-risk systems face few additional requirements. Most internal productivity tools fall here. Incident summarisation for agents is probably minimal risk. But confirm that assessment with your legal team rather than assuming.

The tricky bit is that classification depends on use, not technology. The same Now Assist skill could be minimal risk in one context and high risk in another. Document your assessment and reasoning.
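
One lightweight way to document that assessment is a per-context record, sketched here in Python with invented field names:

```python
# Illustrative sketch: record the risk assessment per use case and context.
# The tiers mirror the EU AI Act's categories; the schema itself is our own.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"

@dataclass
class UseCaseAssessment:
    skill: str
    context: str       # the same skill can land in different tiers by context
    tier: RiskTier
    reasoning: str     # keep the why, not just the verdict
    reviewed_by: str

assessments = [
    UseCaseAssessment("incident_summarisation", "IT agent workspace",
                      RiskTier.MINIMAL,
                      "Internal productivity aid; no decisions about individuals.",
                      "legal"),
    UseCaseAssessment("incident_summarisation", "HR case triage",
                      RiskTier.HIGH,
                      "Output feeds into employment-related decisions.",
                      "legal"),
]
```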

ServiceNow's Privacy Tools

ServiceNow provides specific capabilities for handling sensitive data. Use them.

Data Privacy for Now Assist masks personally identifiable information before it reaches the language model. You configure patterns that identify sensitive data. The system replaces real values with synthetic placeholders before processing, then restores them in the output. The AI never sees the actual personal data.
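
To make that flow concrete, here's a generic Python sketch of the mask-then-restore pattern. It is not ServiceNow's implementation; the regex, placeholder format, and function names are illustrative only.

```python
# Generic mask -> process -> restore pattern, shown for email addresses.
# This sketches the technique, not the platform's actual code.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask(text: str):
    """Swap each email for a placeholder and remember the mapping."""
    mapping = {}
    def repl(match):
        token = f"<EMAIL_{len(mapping)}>"
        mapping[token] = match.group(0)
        return token
    return EMAIL_RE.sub(repl, text), mapping

def restore(text: str, mapping: dict) -> str:
    """Put the real values back into the model's output."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

masked, mapping = mask("Contact jo.bloggs@example.com about the incident.")
# The masked text is what the language model sees; it never gets the address.
model_output = f"Reach out to {list(mapping)[0]} and close the ticket."
print(restore(model_output, mapping))
```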

Several masking techniques are available. Synthetic replacement swaps real data for coherent but fake values. Static replacement uses fixed placeholders. Partial replacement obscures most of a value while keeping some digits visible for context. Pick the right technique for each data type.
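
Here's what those three shapes look like applied to a phone number, again as an illustrative sketch rather than the platform's actual configuration:

```python
# The three replacement shapes, applied to a hypothetical phone number.
def synthetic(value: str) -> str:
    return "+44 7700 900999"      # coherent but fake value

def static(value: str) -> str:
    return "[PHONE REDACTED]"     # fixed placeholder

def partial(value: str) -> str:
    return "*" * (len(value) - 4) + value[-4:]  # keep the last four digits

number = "+44 7700 900123"
print(synthetic(number), static(number), partial(number), sep=" | ")
```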

Sensitive Data Handler provides another layer. It sits in the conversational interfaces settings and automatically identifies sensitive information. You can customise the rules to match your specific requirements. What counts as sensitive varies by industry and jurisdiction.

The Defence-in-Depth Model

Don't rely on a single control. Layer them.

Role-based access controls determine who can trigger AI features and what data they can access. Someone shouldn't be able to use AI to see records they couldn't see without it.

Retrieval-augmented generation limits what knowledge the AI can reference. It can only pull from sources you've configured, not from anywhere in your instance.

Data masking protects specific fields regardless of who's accessing them. Even authorised users don't see raw sensitive values in AI outputs.

Each layer catches what the others might miss. That's the point.
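
As a sketch of how the layers compose, here's some illustrative Python. Every function name and record shape is invented; the point is that each check runs independently of the others.

```python
# Hypothetical sketch of three independent layers applied in sequence.
def can_read(user: dict, record: dict) -> bool:
    return record["acl"] in user["roles"]             # layer 1: role-based access

def in_configured_sources(record: dict, sources: set) -> bool:
    return record["source"] in sources                # layer 2: RAG scoping

def mask_fields(record: dict, sensitive: set) -> dict:
    return {k: ("[MASKED]" if k in sensitive else v)  # layer 3: data masking
            for k, v in record.items()}

def prepare_context(user, record, sources, sensitive):
    if not can_read(user, record):
        raise PermissionError("AI must not widen what the user can already see")
    if not in_configured_sources(record, sources):
        return None                                   # outside configured knowledge
    return mask_fields(record, sensitive)

user = {"roles": {"itil"}}
record = {"acl": "itil", "source": "kb_it",
          "caller_email": "jo.bloggs@example.com", "text": "VPN keeps dropping."}
print(prepare_context(user, record, {"kb_it"}, {"caller_email"}))
```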

Beyond Europe

GDPR and the EU AI Act get the headlines, but they're not the only regulations that matter.

If you operate in California, the CCPA applies. Brazil has the LGPD. Canada has PIPEDA. China has the PIPL. Many industries have sector-specific requirements on top of general privacy law.

Your compliance team should map which regulations apply to which data and which processes. Don't assume European rules cover everything. Don't assume they cover nothing either.

Practical Steps

Review each AI use case against applicable regulations before deployment. Document your data flows. Know what personal data enters the AI, what processing occurs, and what outputs are generated.
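
A data-flow record doesn't need to be elaborate. Something like this illustrative sketch, with invented field names, answers those three questions for each use case:

```python
# Invented schema: what goes in, what happens, what comes out.
data_flow = {
    "use_case": "incident_summarisation",
    "personal_data_in": ["caller name where it appears in description text"],
    "processing": "LLM summarisation of the incident record",
    "outputs": ["summary text displayed to the assigned agent"],
}
```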

Configure masking for any fields containing personal information. Test that masking works as expected. Edge cases catch people out. Names embedded in free-text fields, for instance, might slip through pattern-based detection.
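
A quick test makes that free-text gap visible. The `mask_phone` function below is a hypothetical pattern-based masker, not a platform feature:

```python
# A pattern built for structured fields can miss PII buried in free text.
import re

def mask_phone(text: str) -> str:
    return re.sub(r"\+?\d[\d ]{8,}\d", "[PHONE]", text)

structured = "+44 7700 900123"
free_text = "Jo Bloggs asked us to ring her on 07700 900123 after lunch."

assert mask_phone(structured) == "[PHONE]"   # the easy case passes
assert "Jo Bloggs" in mask_phone(free_text)  # the name sails straight through
```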

Ensure your logging captures what regulators might ask for. Who triggered the AI? What data was processed? What output was produced? When did this happen?
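
An audit entry only has to answer those four questions. Here's an illustrative Python sketch of one possible shape; the schema is invented, so map it onto whatever your logging pipeline actually captures:

```python
# Invented audit schema covering who, what data, what output, and when.
import json
from datetime import datetime, timezone

def log_ai_event(user_id: str, skill: str,
                 input_fields: list, output_ref: str) -> str:
    event = {
        "who": user_id,                                  # who triggered the AI
        "skill": skill,                                  # which capability ran
        "data_processed": input_fields,                  # what data was processed
        "output": output_ref,                            # what output was produced
        "when": datetime.now(timezone.utc).isoformat(),  # when it happened
    }
    return json.dumps(event)

print(log_ai_event("u.jbloggs", "incident_summarisation",
                   ["short_description", "description"], "output_ref_001"))
```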

Train your team on data protection requirements. Technical controls help, but people make decisions about what data to include and how to use outputs. They need to understand the rules.

The Ongoing Reality

Regulations evolve. The EU AI Act is still being interpreted. Enforcement priorities shift. What's acceptable today might face scrutiny tomorrow.

Build compliance into your processes rather than treating it as a one-time checkbox. Regular reviews. Updated documentation. Continuous awareness. This isn't a problem you solve once and forget about.

Right, that wraps up strategic planning. You understand the assessment work needed, the use cases worth pursuing, the people who need involving, the governance to establish, and the regulations to follow.

Time to understand the architecture that makes all of this work.
