AI is here — ready or not
AI tools have become a natural part of daily work for millions of people. ChatGPT, Copilot, Claude and others have transformed how we work. But with the opportunities come risks that many organisations have yet to address.
The fundamental question: Do you know which AI tools are being used in your organisation, by whom, and with what data?
AI risks to manage
- Data leakage: Employees input sensitive data (customer information, trade secrets, code) into AI tools, where it may be stored, used for training, or leak in other ways.
- Shadow AI: Employees use AI tools that IT hasn't approved or doesn't even know about. No control, no oversight, no risk management.
- Prompt injection: Malicious actors manipulate AI systems through specially crafted prompts, which can cause the system to reveal information or perform unwanted actions.
- Hallucinations: AI "makes up" information that looks credible but is incorrect, which can lead to wrong decisions if output isn't verified.
- Bias: AI models can have built-in biases that affect results. This is particularly risky when making decisions that affect individuals.
- Supplier dependence: Reliance on AI suppliers creates new risks. What happens if the service changes, shuts down, or is compromised?
How AI affects NIS2 compliance
- Risk management (Article 21.2a): AI risks should be included in your risk assessment. Which AI tools are used? What data is exposed? What threats does AI usage create?
- Incident handling (Article 21.2b): AI-related incidents, such as data leakage or manipulation, should be handled through your existing incident processes.
- Staff training (Article 21.2i): Training should cover secure AI usage. Staff need to understand the risks.
- Supplier security (Article 21.2d): AI suppliers are suppliers. The same requirements for assessment and follow-up apply.
Build an AI policy
- Inventory current state: Which AI tools are used today, officially and unofficially? What data is input? Start by understanding reality.
- Classify use cases: Which use cases are acceptable? Code assistance, text processing, data analysis? What type of data may be used?
- Choose approved tools: Evaluate and approve specific AI tools. Prioritise enterprise versions with better data protection. Create an "allowlist".
- Define data classification: What data should never be input into AI? Personal data, trade secrets, customer data, source code? Be clear.
- Train staff: Everyone using AI needs to understand the risks and the rules. Make it practical and concrete.
- Monitor and follow up: How do you know the policy is followed? Technical controls? Spot checks? Regular reviews?
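As an illustration of what a technical control for the steps above might look like, here is a minimal sketch of a pre-submission policy check. The tool allowlist and the sensitive-data patterns are hypothetical placeholders; a real deployment would use your own approved-tool list, data classification rules, and detection tooling.

```python
import re

# Hypothetical allowlist of approved AI tools -- replace with your own.
APPROVED_TOOLS = {"enterprise-chatgpt", "internal-copilot"}

# Hypothetical patterns for data that must never reach an AI tool.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API key marker": re.compile(r"(?i)api[_-]?key"),
}

def check_prompt(tool: str, prompt: str) -> list[str]:
    """Return a list of policy violations for a prompt bound for an AI tool."""
    violations = []
    if tool not in APPROVED_TOOLS:
        violations.append(f"tool '{tool}' is not on the allowlist")
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            violations.append(f"prompt appears to contain a(n) {label}")
    return violations
```

A check like this could run in a browser plugin or proxy before a prompt leaves the organisation; pattern matching will never catch everything, so it complements rather than replaces training.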
Checklist for responsible AI
Before using an AI tool:
- Is the tool on the approved list?
- Is the input free of sensitive data?
- Do you understand how data is handled by the supplier?
- Is there an enterprise agreement that protects data?
During use:
- Verify AI-generated content
- Don’t input sensitive information
- Review output for errors
- Follow organisational policy
Continuously:
- Inventory AI usage regularly
- Update policy as needed
- Train new employees
- Monitor for shadow AI
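One way to monitor for shadow AI is to scan web-proxy or DNS logs for traffic to known AI services. A minimal sketch, assuming a simple "user domain" log format and a hypothetical domain watchlist that you would maintain yourself:

```python
from collections import Counter

# Hypothetical AI-service domains to watch for -- maintain your own list.
AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

def shadow_ai_report(log_lines):
    """Count requests per (user, AI domain) in 'user domain' proxy log lines."""
    hits = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 2 and parts[1] in AI_DOMAINS:
            hits[(parts[0], parts[1])] += 1
    return hits
```

A report like this shows who is using which services and how often, which is the starting point for a conversation, not for blame: frequent hits on an unapproved service usually mean the approved tools aren't meeting a real need.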
Common mistakes
- Imposing a total ban: Banning all AI usage doesn't work; employees find ways around the ban. Controlled usage is better.
- Assuming "we don't use AI": That is rarely true. Employees use tools you don't know about.
- Treating AI as an IT-only matter: AI usage is a business issue. HR, legal and management need to be involved.
- Writing the policy once and forgetting it: The AI landscape changes rapidly, so policies must be updated continuously.
Practical tips
Start simple
You don’t need a perfect policy from day one. Start with basic guidelines and build on them.
Be pragmatic
Absolute bans create shadow AI. Allow usage under controlled conditions.
Communicate why
Explain the risks so employees understand. Engagement increases compliance.
Learn from incidents
When something goes wrong (and it will), learn from it. Update processes.
How Securapilot can help
Securapilot supports organisations in managing AI risks:
- Risk management — Include AI risks in your risk register
- Policy management — Document and distribute AI policy
- Training — Track completed training
- Incident management — Handle AI-related incidents
- Supplier assessment — Assess AI suppliers
Book a demo and see how we can help you manage AI security.
Frequently asked questions
What is shadow AI?
Shadow AI refers to AI tools that employees use without IT department approval. This could be ChatGPT, free AI services or embedded AI in other tools. The risk is that sensitive data may leak out.
How does AI affect NIS2 compliance?
AI risks should be included in your risk management under NIS2. AI-driven attacks are a threat to address. And your own AI usage can create vulnerabilities if not controlled.
Should we ban AI?
No, it's neither practical nor desirable. Instead: define acceptable use, choose approved tools, train staff, and monitor. Controlled usage is better than prohibition.
Which AI tools are safe?
It depends on the use case and data. Evaluate each tool: where is data stored? Is it used for training? What security controls exist? Enterprise versions are often safer.