
Risk Analysis in Practice: Five Mistakes That Make Your Risk Matrix Meaningless

Everyone talks about risk assessments. Few discuss why most fail to provide real decision support. Here are five mistakes to avoid.

  • 70% of organizations lack formal risk management processes (Industry Report)
  • NIS2 Article 21 requires risk-based security measures (NIS2 Directive)
  • ISO 27005 recommends separating inherent risk from control assessment (ISO 27005:2022)

Everyone does risk assessments. Few do them right.

Risk analysis is the cornerstone of the NIS2 Directive, ISO 27001, and virtually every security framework that exists. Everyone knows it should be done. But after seeing hundreds of risk matrices, I can conclude: most provide an accurate picture of completely wrong things.

The problem isn’t that organizations skip risk analysis. The problem is they do it in a way that doesn’t provide real decision support. The result is a document that looks professional in a binder but never influences a single decision.

Here are the five most common mistakes — and how to avoid them.

Mistake 1: Copying others’ risk registers

It’s tempting to start with a template. Someone else’s risk register, an industry standard’s example risks, or a consultancy’s predefined list. The problem? Others’ risks aren’t your risks.

A manufacturing organization and a SaaS provider have fundamentally different risk landscapes. Even within the same industry, risks vary depending on size, IT environment, customer structure, and maturity.

Right approach: Start with your own information flows. What information do you handle? Where is it stored? Who has access? Which processes depend on it? Your risks arise in the gap between information value and threats that can exploit weaknesses in how you handle it.

Mistake 2: Mixing inherent risk and control effectiveness

This is the most widespread error. The organization assesses a risk as “low” — but it’s only low because there’s already a control in place. What happens if the control stops working? If the firewall rule changes, if the backup service ceases, if the experienced employee leaves?

When inherent risk and control effectiveness are mixed in the same assessment, you lose the ability to understand what the controls actually contribute. You can’t see where you’re most vulnerable if a control fails.

Always separate the assessment into two steps:

  1. Inherent risk — How serious is the risk without controls? This gives you a picture of the underlying threat.
  2. Control effectiveness — How well do existing controls reduce the risk? This shows where your investments actually make a difference.

The difference gives you residual risk — the risk you actually live with today. This is what should be compared against your risk appetite.
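The two-step separation above can be sketched in a few lines of Python. This is a minimal illustration, not the article's method: the field names, the 1–5 scale, and the simple subtraction model are all assumptions chosen for clarity.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    inherent: int        # 1 (low) to 5 (high): risk level with no controls in place
    control_effect: int  # 0 (no reduction) to 4: how much existing controls reduce it

    @property
    def residual(self) -> int:
        """Residual risk: what remains after controls, floored at 1."""
        return max(1, self.inherent - self.control_effect)

# A high inherent risk with a strong control still leaves residual risk --
# and the gap between the two numbers shows what the control is worth.
ransomware = Risk("Ransomware on file servers", inherent=5, control_effect=3)
print(ransomware.residual)  # 2 -- compare this against your risk appetite
```

Keeping `inherent` and `control_effect` as separate fields is the point: if a control fails, you set its effect to zero and immediately see the exposure, instead of re-debating the whole assessment.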

Mistake 3: False precision with multidimensional scales

A 5×5 matrix with probability and consequence gives 25 possible levels. It looks precise. But if assessors can’t distinguish between “probability 3” and “probability 4” consistently, you’ve just added noise to the analysis.

Worse: many organizations use scales where dimensions aren’t independent. “High probability and high consequence” automatically becomes “critical risk” — but what if probability is high precisely because consequence is low (and the organization therefore hasn’t prioritized protection)?

What works better:

  • Use fewer levels (3×3 often suffices)
  • Define each level with concrete examples relevant to your operations
  • Calibrate by scoring a few risks together as a group before assessors rate the rest individually
  • Accept that risk assessment is a qualified estimate, not an exact science
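A 3×3 matrix can be encoded as an explicit lookup rather than a probability-times-consequence formula, so the cell values can reflect organisational judgement (for example, letting high consequence dominate). The specific cell assignments below are illustrative assumptions, not a standard:

```python
# Explicit 3x3 lookup: (probability, consequence) -> overall risk level.
# Each axis uses three levels: 1 = low, 2 = medium, 3 = high.
MATRIX = {
    (1, 1): "low",    (1, 2): "low",     (1, 3): "medium",
    (2, 1): "low",    (2, 2): "medium",  (2, 3): "high",
    (3, 1): "medium", (3, 2): "high",    (3, 3): "high",
}

def classify(probability: int, consequence: int) -> str:
    """Look up the overall level; raises KeyError for out-of-range inputs."""
    return MATRIX[(probability, consequence)]

print(classify(2, 3))  # high
```

Because every cell is written out, the mapping can be reviewed and debated by the whole team — which is exactly the calibration exercise the bullet list recommends.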

Mistake 4: Risk without connection to business impact

“The risk of unauthorized system access is assessed as medium.” Excellent. And what does that mean for the business? Nothing, if the assessment stops there.

The board doesn’t want to hear probability levels. They want to know: what does it cost if it happens? How long are we down? Which customers are affected? What regulatory consequences are triggered?

The connection between technical risk and business impact is what makes risk analysis a decision tool instead of an IT document.

Technical assessment → Business language:

  • "High probability of ransomware" → "30% risk of 5-day production shutdown, estimated cost £2.5–6.5 million"
  • "Medium risk of data breach" → "Potential GDPR fine and loss of 2–3 key customers"
  • "Low risk of DDoS" → "Max 4 hours downtime, limited business impact"
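The translation in the first row can be made mechanical: multiply probability by the midpoint of the cost range to get an annualised expected loss. A minimal sketch, with the function name and output format as assumptions:

```python
def business_impact(probability: float, cost_low: float,
                    cost_high: float, days_down: int) -> str:
    """Render a technical risk as a board-level statement (illustrative format)."""
    expected = probability * (cost_low + cost_high) / 2  # expected annual loss
    return (f"{probability:.0%} risk of {days_down}-day shutdown, "
            f"cost £{cost_low / 1e6:.1f}-{cost_high / 1e6:.1f}M "
            f"(expected loss ~£{expected / 1e6:.2f}M/year)")

# The ransomware row from the table above:
print(business_impact(0.30, 2_500_000, 6_500_000, 5))
# 30% risk of 5-day shutdown, cost £2.5-6.5M (expected loss ~£1.35M/year)
```

The expected-loss figure is what makes budget conversations concrete: a £200k control against a £1.35M expected annual loss is an easy decision to defend.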

Mistake 5: One-time exercise instead of living process

The most common time for a risk analysis? Just before an audit, certification, or supervisory inspection. Then it goes in a folder until next time.

A risk analysis that hasn't been touched since it was written reflects a threat landscape that no longer exists. New systems are introduced, suppliers change, threat actors shift tactics, organizations restructure. A static risk analysis is like last month's weather forecast — it was correct when made, but it drives no decisions today.

Update upon changes

New systems, suppliers, processes, or organizational changes should trigger a re-evaluation of affected risks. It doesn't need to be a complete overhaul — focus on what's changed.

Schedule regular reviews

At least annually, the entire risk register should be reviewed. Quarterly review of highest-priority risks provides even better governance.

Integrate with incident management

Every incident should lead to a re-evaluation of relevant risks. Incidents give you actual data about the threat landscape — use it.

Make it accessible

A risk register only the risk analyst can interpret drives no decisions. Make it comprehensible for those making decisions — management.
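The review cadence described above (annual for everything, quarterly for top risks) is easy to automate as a staleness check. A small sketch, assuming a `last_reviewed` date is tracked per risk; the interval values mirror the article's recommendation:

```python
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=365)    # annual minimum for the full register
PRIORITY_INTERVAL = timedelta(days=90)   # quarterly for highest-priority risks

def needs_review(last_reviewed: date, high_priority: bool, today: date) -> bool:
    """True if the risk is overdue for re-evaluation under its cadence."""
    interval = PRIORITY_INTERVAL if high_priority else REVIEW_INTERVAL
    return today - last_reviewed > interval

# A high-priority risk last reviewed in January is overdue by June:
print(needs_review(date(2024, 1, 10), high_priority=True, today=date(2024, 6, 1)))  # True
```

Running a check like this on every register change (or on a schedule) turns "update upon changes" from a policy statement into something that actually surfaces stale risks.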

Risk analysis that drives decisions

A risk matrix isn’t a goal in itself. It’s a tool for making better decisions about where the organization should allocate its limited resources. If your risk analysis doesn’t change priorities, doesn’t influence budget decisions, and isn’t discussed at management level — then it serves no purpose.

Ask yourself: when was the last time a risk analysis result actually led to a change? If the answer is “never” or “I don’t know” — then it’s time to re-evaluate not just the risks, but the process.

Securapilot’s risk module is built on ISO 27005 and separates inherent risk from control assessment — so your risk picture actually reflects reality and gives you the decision support you need.


Frequently asked questions

Why don't risk matrices work?

Risk matrices can work — if used correctly. Problems arise when organizations copy generic risk registers, confuse risk levels with control status, or never update their assessment. A risk matrix should provide decision support, not just a colour-coded picture.

What's the difference between inherent risk and residual risk?

Inherent risk is the risk level before controls are applied. Residual risk is the risk that remains after controls are implemented. Separating these provides a clear picture of which controls actually make a difference.

How often should risk analysis be updated?

At least annually, but also during significant changes to operations, IT environment, or threat landscape. A risk analysis not updated for over a year likely doesn't reflect reality.

How do you connect risk analysis to business decisions?

Express risks in terms the board understands: potential financial impact, business interruption in hours/days, and regulatory consequences. Avoid technical jargon and probability percentages without context.


#risk-analysis #risk-matrix #risk-management #ISO-27005 #decision-support #NIS2
