
I want to do a formal cost-benefit analysis of security measures - and hence am looking for hard statistics about the probability / frequency of desktop machines being compromised (not recommendations of how to avoid this). I appreciate that this will vary massively depending on how well managed a particular estate is, what security controls are in place, and impact will vary massively depending on the nature of the attack, but currently I have nothing to go on apart from vendor propaganda.

My googling only turns up thousands of articles containing the same advice about how to avoid being compromised.

While this is exactly the question asked here, I don't see an answer there.

symcbean
  • If there were a formula, similar to Drake's Equation, containing many factors that estimates the overall probability of a system being compromised, one could calculate the known factors and assign best-effort values to unknown ones to hopefully get in the ballpark. – postoronnim Mar 13 '20 at 21:10
  • 1
    @postoronnim The Drake equation is a pretty good analogy. Primarily because it's famous for having a large number of the variables being completely unknown, and no "best guess" exists for them. That's why you get answers ranging from 1 civilization in the galaxy, to millions. It's completely useless as a model. – Steve Sether Mar 23 '20 at 13:53

2 Answers


There are no "hard statistics" on desktop compromise, only case studies and research into limited contexts where compromise has occurred. And that data is not useful for making inferences in other contexts.

Case studies and "vendor propaganda" study a limited set of systems and compare the historical results. That's great if your system is in the same context and has the same users, threats, processes, and so on. They don't tend to include systems in similar contexts that had a different result, because that just gets confusing as a narrative.

There's a reason why you can't Google this and get an authoritative data set: the factors involved are numerous, interrelated (one factor can affect another), and there is often no linear causal relationship between the factors and the outcome. It's a "Complex Problem" (also known as a "wicked problem").


Probability and cost/benefit analyses depend on a stable system. The problem with information security is that stable digital systems are unusual. Digital systems get more complex with every update, patches alter how systems function, the system context can change, people use and maintain the system, and the external threats to the system are sentient and adaptive. Each of those factors changes the underlying basis for a calculation.

There are ways to get closer to a probability calculation using formal probability approaches, and for that you will want to read How To Measure Anything in Cybersecurity Risk. But the author takes the approach that systems and their context do not change enough to worry about deviations. I dispute that as a universal premise: it's true in some contexts, just not any that I have worked in.

If you constrain your analyses to the stable form of a system, you can end up with a calculation for a system state and context that no longer exists: fancy answers that are functionally useless as decision support for what to do next.

To do the analysis that you want to do:

  1. Analyse your systems in context to identify those factors that are linear with known and limited cause/effect contexts. What's linear for you might not be linear in another org (and what's linear for you might change over time). You then work out probabilities for those things.
  2. Then you need to identify the non-linear contexts, the Complex contexts, and get as many people who know the systems and contexts together to get their opinions. For Complex contexts, you can't use linear calculations of probabilities: you need perspectives. Then you track those opinions over time.
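For step 1, one way to turn those linear factors into numbers, in the spirit of the Monte Carlo approach advocated in How To Measure Anything in Cybersecurity Risk, is to combine an estimated event probability with a 90% confidence interval for the loss if the event occurs. This is a minimal sketch, not the book's full method, and every figure in it is made up for illustration:

```python
import math
import random

def simulate_annual_loss(p_event, loss_lo, loss_hi, trials=100_000):
    """Monte Carlo estimate of expected annual loss for one linear risk factor.

    p_event          : estimated probability the event occurs in a given year
    loss_lo, loss_hi : 90% confidence interval for the loss if it occurs,
                       modelled here as a lognormal distribution
    """
    # Convert the 90% CI bounds into the underlying normal's parameters
    # (the CI bounds sit at +/- 1.645 standard deviations in log space)
    mu = (math.log(loss_lo) + math.log(loss_hi)) / 2
    sigma = (math.log(loss_hi) - math.log(loss_lo)) / (2 * 1.645)

    total = 0.0
    for _ in range(trials):
        if random.random() < p_event:  # did the compromise happen this year?
            total += random.lognormvariate(mu, sigma)
    return total / trials  # expected annual loss across simulated years

# Illustrative only: 5% chance per year, loss 90% CI of 10k-500k
expected = simulate_annual_loss(0.05, 10_000, 500_000)
```

The output feeds directly into a cost/benefit comparison: a control costing less per year than the reduction it produces in expected annual loss is, on these assumptions, worth buying. The caveats above still apply — the moment the system or its context shifts, the inputs are stale.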

There is a very real possibility that you will not be able to perform a cost/benefit forecast analysis, only a retrospective one. Knowing that is very important.


To learn more about this problem, look up:

  • Dr Nancy Leveson from MIT - she asserts there is a complexity threshold in socio-technical systems beyond which impact and likelihood are inherently unpredictable
  • Dr David Snowden - creator of the Cynefin Framework around making sense of contexts that are a mix of the linear and non-linear (and then what to do about it)
  • WEF's "Towards Quantification of Cyber Risk" - promoting the "Value-at-Risk" approach, which is a nice way of averaging out the historical data to make some useful inferences
  • I have also done work in this area for a few years. My first slide deck after I asked this same question six years ago.

The slide deck explains how several frameworks approach this problem:

  • COSO
  • ISO 31000/ ISO 27005
  • NIST 800-39
  • RISK IT
  • FAIR
  • OCTAVE Allegro

The US Government Accountability Office (GAO) has this to say about the problem:

“Reliably assessing information security risks can be more difficult than assessing other types of risks, because the data on the likelihood and costs associated with information security risk factors are often more limited and because risk factors are constantly changing.”

“Even if precise information were available, it would soon be out of date due to fast-paced changes in technology and factors such as improvements in tools available to would-be intruders.”

schroeder

I think this is an interesting question, and I agree with the answer above.

I also feel you're focusing on the wrong aspects: you can't get a rate of occurrence for everything, since there's too much to account for.

I'd recommend looking at your network, identifying its weaknesses, evaluating the risk factors and the likelihood of each particular risk occurring, and then calculating the ARO (annualized rate of occurrence).

You can cross-reference the above with a compiled list of actual past incidents to help provide you with a road map of where you currently are, how often these incidents occur, what risks you face, and what you need to do to address them.

That's a very basic start on what I believe you'll need to think about. It seems like you're wanting to create an Incident Response plan; one standard I've been reading through for my company is NIST 800-61. The ISO standards are great but hidden behind a paywall, whereas NIST 800-61 provides a very good foundation for incident response.

NIST 800-61: https://csrc.nist.gov/publications/detail/sp/800-61/rev-2/final

Hope this helps!

  • Knowing what is a realistic ARO is **exactly the problem I am trying to solve**! – symcbean Mar 15 '20 at 16:06
  • 1
    @symcbean as I said, you can calculate the ARO ***of the past*** but you will never have enough information to be able to forecast the future ARO unless the systems and the people using them are static. You are asking to predict the number of sunny days next September, with the understanding that any small miscalculation can mean massive impact to your org. You are looking for the wrong thing. – schroeder Mar 23 '20 at 12:06