
I have been looking at risk assessments lately, and I am looking for a way to practically estimate likelihood. Most people recommend assessing likelihood based on historical precedent, which sounds great to me; however, some risks have never materialized at my company yet still seem pretty likely.

Does anyone have any advice on how to deal with this? Should I base the annual rate of occurrence on my professional judgement (I'd prefer more reality-based options, to be honest)? Or should I use different likelihood scales for "the big risks"?

Mattey

2 Answers


You've discovered the Big Question in cyber risk management. There is no easy answer. I've written entire papers on the subject.

The problem you have identified is determining what data to use as the basis for a quantified likelihood calculation. The challenges you face are:

  • there is not enough data in your context
  • the foundation for the risk context is constantly shifting (technology keeps changing, and new threats and vulnerabilities keep emerging, creating "perfect uncertainty" even if you had lots of data)

There are a few approaches to take:

  1. Use a Qualified approach based on expert opinion (and quantify a range of opinions over time for extra rigor; see the sketch after this list)
  2. Use a sliding scale of impact and determine what quantified data you do have for each range of impact
  3. Use an adjacent context that does have enough data for you to use (partners, competitors, others "just like you")
  4. Replace "likelihood" with "ease" in your calculations
  5. Assume a longer time scope and assume that all events will happen eventually, which leaves you with assessing impact alone
  6. Combine the above approaches based on available relevant data and stakeholder requirements
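To make approach 1 a little more concrete: here is a minimal Monte Carlo sketch of quantifying a range of expert opinions. Every number in it is a made-up illustration (hypothetical calibrated ranges, not data from any real assessment), and uniform sampling is deliberately the crudest possible choice.

```python
import random

# Hypothetical expert inputs (illustration only): calibrated ranges collected
# from a panel, per approach 1. None of these numbers come from real data.
prob_low, prob_high = 0.05, 0.30            # annual probability the event occurs
impact_low, impact_high = 50_000, 400_000   # loss per occurrence, in your currency

def simulate_years(trials=100_000):
    """For each simulated year, draw a probability and an impact from the
    expert ranges and record the resulting annual loss."""
    losses = []
    for _ in range(trials):
        p = random.uniform(prob_low, prob_high)
        if random.random() < p:              # the event happens this year
            losses.append(random.uniform(impact_low, impact_high))
        else:
            losses.append(0.0)
    return losses

losses = sorted(simulate_years())
print(f"Expected annual loss: {sum(losses) / len(losses):,.0f}")
print(f"95th percentile loss: {losses[int(0.95 * len(losses))]:,.0f}")
```

The value is not in the specific distributions; it is that ranges from several experts can be fed through the same simulation, compared, and revisited over time, which is far more defensible than a single point estimate.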

As you can guess, I could go into a book's worth of exploration for each point. If you want a book, I can recommend How To Measure Anything in Cybersecurity Risk. I do not agree with everything in the book, nor with all of the authors' conclusions or underlying assumptions, but it is a great place to start your thinking.


The one thing I caution you about is "false accuracy". Just because you have data and you can use a formula to generate a valid output does not mean that your conclusion is at all correct or can accurately represent likelihood. I see too many risk professionals fall into the trap of "But I made a fancy graph with lots of data points! It must be right!" All risk management is a guess and a bet against the future. And sometimes, spending too much time trying to get accurate about something that hasn't happened yet is just a waste of time.

schroeder
  • That is not a very satisfying conclusion, but maybe it is unavoidable. Option 3 seems like a good one; I think upper management will find it very understandable. I guess this is also a place where those threat-intel sharing organizations are useful. I will order the book! – Mattey Nov 25 '21 at 13:44
  • 1
    From my experience, if you start with #3, you will quickly add in #1, then a little of #2. Remind your stakeholders that cyber risk is very different from other types of business risk. Risk contexts change on a dime and in unpredictable ways. Cyber risk needs to be reassessed far more often than other risks. And it's the constant reassessment that you need to do to be able to adapt to what comes your way. – schroeder Nov 25 '21 at 13:53
  • To be honest, I don't quite understand what you mean by option 2 yet. It could be a bit of a language barrier; could you explain it in more layman's terms or give a (small) example? – Mattey Nov 25 '21 at 14:51
  • 1
    Most people assess the risk of an event *type*, like DDoS, not the different range of impacts of each event type. For instance, what's the risk of a DDoS that interrupts operations for 5 minutes? An hour? A Day? Each has a different likelihood and impact. – schroeder Nov 25 '21 at 14:56
  • 1
    At my workplace, the policy is to replace "likelyhood" with "ease of exploitation" with can be measured in time and expertise required to exploit. This allows to can take into account the threat level assumed in your threat analysis. – A. Hersean Nov 30 '21 at 09:14

You could try using a Capability-Motivation grid for each of your identified threat actors.

If you use this approach, it's important to remember that Capability doesn't just refer to the attacker's skill. It's their estimated ability to successfully compromise the confidentiality, integrity, or availability (CIA) of the system under assessment. So if your system has missing or ineffective controls, or one of your possible threat actors is a malicious privileged insider, the capability could be "Moderate" or "High" even if the actor's hacking skill is relatively low.
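If it helps to see the mechanics, here is a rough sketch of turning such a grid into comparable scores. The actor names and level assignments are hypothetical, and real IS1-style grids use defined lookup tables rather than a simple product, so treat this as a toy illustration only.

```python
# Toy Capability-Motivation grid (illustration only). "Capability" here means
# the actor's estimated ability to compromise *this* system, controls included,
# not just their raw hacking skill.
LEVELS = {"Low": 1, "Moderate": 2, "High": 3}

# Hypothetical threat actors assessed against a single system.
threat_actors = {
    "Opportunistic criminal": ("Low", "Moderate"),
    "Privileged insider":     ("High", "Moderate"),
    "Organised crime group":  ("Moderate", "High"),
}

def threat_score(capability: str, motivation: str) -> int:
    # Combined the same way probability x impact is combined on a risk grid.
    return LEVELS[capability] * LEVELS[motivation]

for actor, (cap, mot) in sorted(threat_actors.items(),
                                key=lambda kv: -threat_score(*kv[1])):
    print(f"{actor:24} capability={cap:9} motivation={mot:9} "
          f"score={threat_score(cap, mot)}")
```

Note how the privileged insider can score highly on capability even with modest technical skill, which is exactly the point about missing or ineffective controls.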

Gethin LW
  • I am not really at home in the terminology yet, so let me do a quick check to see if I understand it. This would mean deriving a likelihood score from capability x motivation? An angry administrator would be motivated and capable, but I don't think that is very likely in our company, yet this would still result in a high likelihood. Would you remove the "angry" part and use motivation to estimate how motivated your admins are to sabotage your systems? Do you maybe have an example? – Mattey Dec 01 '21 at 10:27
  • 1
    Yes, capability*motivation in exactly the same way as probability*impact for a risk grid - this is a methodology from the old IS1&2 standard. It's not what I'd use by choice but it can help if you don't have anything else to base your assessment on. If you're defining the threat actor based on their motivation, then yes it shortcuts part of this analysis and results in a high risk rating. But maybe that's appropriate - controlling malicious privileged insiders is a significant issue for businesses which is why PIM solutions and separation of privileges exist (but are often poorly implemented). – Gethin LW Dec 03 '21 at 10:31
  • 1
    (above should be "capability x motivation" and "probability x impact" but got interpreted as markdown) – Gethin LW Dec 03 '21 at 10:38
  • 1
    Also bear in mind that malice is not the same as motivation. You could have a disgruntled insider that is hostile to your organisation, but nevertheless isn't motivated to attack because of other factors (eg. potential legal penalties, audits, friendship with colleagues, consideration of their reputation, etc.). I appreciate that this is hard to quantify, but humans are complicated. – Gethin LW Dec 03 '21 at 10:52