
I am working on a state of the art of security quantification, meaning a numerical assessment of the security of a system.

In my research, most of the work I have found is not recent (nothing later than 2012 so far) and is very theoretical. Most of it relies on semi-Markov processes to express the probability of a system becoming unavailable, leaking information, etc. This deviates from the safety analysis of a system, where you check the probability of your system reaching certain good / bad states.
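
To give a concrete (toy) picture of what these models compute, here is a minimal sketch of my own, with states and transition probabilities I made up: a discrete-time Markov chain over security states, where the probability of eventually reaching an absorbing "compromised" state is obtained from the fundamental matrix.

```python
import numpy as np

# Hypothetical security states: 0 = healthy, 1 = vulnerable, 2 = under attack (transient)
# plus two absorbing states: 3 = compromised, 4 = recovered.
# All transition probabilities below are invented, purely for illustration.
P = np.array([
    [0.90, 0.10, 0.00, 0.00, 0.00],   # healthy
    [0.20, 0.50, 0.30, 0.00, 0.00],   # vulnerable
    [0.00, 0.00, 0.40, 0.35, 0.25],   # under attack
    [0.00, 0.00, 0.00, 1.00, 0.00],   # compromised (absorbing)
    [0.00, 0.00, 0.00, 0.00, 1.00],   # recovered   (absorbing)
])

transient, absorbing = [0, 1, 2], [3, 4]
Q = P[np.ix_(transient, transient)]   # transient -> transient block
R = P[np.ix_(transient, absorbing)]   # transient -> absorbing block

N = np.linalg.inv(np.eye(len(transient)) - Q)   # fundamental matrix
B = N @ R   # B[i, j] = P(absorbed in absorbing state j | start in transient state i)

print("P(compromised | start healthy) =", B[0, 0])
print("P(recovered   | start healthy) =", B[0, 1])
```

A semi-Markov process adds sojourn-time distributions on top of the embedded chain, but the absorption structure stays the same.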

The idea behind that is to take an attack graph and compute the probability of each security layer failing, one after another, until the targeted asset is reached.
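
As a rough sketch of what I mean by "each layer failing along a path" (my own simplification, with invented numbers, assuming the layer compromises are independent, which is a strong assumption):

```python
# Hypothetical per-layer compromise probabilities along one attack path
# (entry point -> firewall -> host hardening -> access control on the asset).
path_layers = {"firewall": 0.3, "host_hardening": 0.2, "asset_access_control": 0.5}

def path_success_probability(layers):
    """Probability that the attacker defeats every layer on the path,
    assuming independent layer failures (a strong simplification)."""
    p = 1.0
    for prob in layers.values():
        p *= prob
    return p

print(path_success_probability(path_layers))  # 0.3 * 0.2 * 0.5 = 0.03
```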

I was wondering whether anyone here has an idea of which methods are actually used. Do we just do a vulnerability analysis for each system component? (I doubt it, as that can be seen as a subjective analysis, and the final security value can depend heavily on the method used for the analysis.)

Edit: in order to make things clearer, I will describe my whole subject (it's for a PhD thesis I have just started) and where the question above fits in.

I have a system (more or less secure) and I want to generate all the attack paths from a set of entry points E to a function F. Once this is done, I want to evaluate each of these attack paths and state whether the security of a path is sufficient for the system owner / designer. If it is not, I have to add security layers along the path (which could be anything: firewall, HSM, TPM, crypto ...) in order to meet the security constraints while minimizing the impact on cost and performance.
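
To make the intended workflow concrete, here is a minimal sketch with an invented graph, invented per-step compromise probabilities and invented countermeasure costs (none of this comes from the literature): enumerate the simple paths from the entry points E to F, score each path, and if a path exceeds the acceptable success probability, pick the cheapest countermeasure that would bring it back under the threshold.

```python
# Hypothetical attack graph: node -> {successor: P(attacker defeats that step)}.
# All structure, probabilities and countermeasure data below are invented.
graph = {
    "entry_wifi":  {"gateway": 0.4},
    "entry_usb":   {"workstation": 0.6},
    "gateway":     {"workstation": 0.3, "F": 0.1},
    "workstation": {"F": 0.5},
    "F": {},
}
entry_points = ["entry_wifi", "entry_usb"]

# Hypothetical countermeasures: name -> (cost, factor by which it reduces the path probability).
countermeasures = {"firewall": (10, 0.5), "TPM": (25, 0.3), "HSM": (40, 0.2)}

def enumerate_paths(node, target, visited=None):
    """All simple paths from node to target, as lists of (src, dst) edges (DFS)."""
    visited = visited or {node}
    if node == target:
        yield []
        return
    for nxt in graph[node]:
        if nxt not in visited:
            for rest in enumerate_paths(nxt, target, visited | {nxt}):
                yield [(node, nxt)] + rest

def path_probability(path):
    """Success probability of a path, assuming independent steps."""
    p = 1.0
    for src, dst in path:
        p *= graph[src][dst]
    return p

def harden(path, threshold):
    """Cheapest countermeasure that brings the path success probability
    below the threshold, or None if no single countermeasure suffices."""
    for name, (cost, factor) in sorted(countermeasures.items(), key=lambda kv: kv[1][0]):
        if path_probability(path) * factor < threshold:
            return (name, cost)
    return None

threshold = 0.05
for entry in entry_points:
    for path in enumerate_paths(entry, "F"):
        p = path_probability(path)
        verdict = "OK" if p < threshold else f"add {harden(path, threshold)}"
        print(path, f"p={p:.3f}", verdict)
```

Of course the interesting (and hard) part is where the per-step probabilities come from, which is exactly the question above.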

From the above problem, there is a need for security quantification, both to state whether the system meets the owner's requirements and to express security as a constraint in the optimization problem. I am not fond of risk analysis, mainly because there are too many methods available (ETSI, EVITA, EBIOS ...), some using the ISO 27005 values, some not, and also because it is static: it is a snapshot of the system at time t, with values that can be quite subjective.

What I like about probabilities is that you can work on the component state itself, without relying on subjective values. However, there is still a modelling problem: how do you compute the probability of going from a good state to a bad state? Does it depend on the attack scenario? And, as schroeder says below, is it still valid for a dynamic system?

The most cited paper in the field comes from B. B. Madan: "A method for modeling and quantifying the security attributes of intrusion tolerant systems", which was financed by DARPA, Space and Naval Warfare and NASA, and a lot of work uses it as a basis.
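
As far as I understand that line of work, the central quantity is a mean time to security failure (MTTSF) computed from a semi-Markov model: the expected number of visits to each transient state (from the embedded DTMC), weighted by the mean sojourn times. Here is a minimal sketch with numbers I invented, not values from the paper.

```python
import numpy as np

# Embedded DTMC of a hypothetical semi-Markov security model.
# Transient states: 0 = good, 1 = vulnerable, 2 = active attack.
# The absorbing "security failed" state is left implicit: Q only holds
# transient -> transient probabilities, the remaining mass goes to absorption.
Q = np.array([
    [0.0, 0.6, 0.0],
    [0.3, 0.0, 0.5],
    [0.2, 0.0, 0.0],
])
h = np.array([100.0, 20.0, 5.0])   # invented mean sojourn times (hours) per transient state
pi0 = np.array([1.0, 0.0, 0.0])    # start in the good state

# Expected number of visits to each transient state before absorption.
V = pi0 @ np.linalg.inv(np.eye(3) - Q)

mttsf = V @ h   # mean time to security failure
print(f"MTTSF ≈ {mttsf:.1f} hours")
```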

So far, Markov models look to be the most used in the field, with fuzzy logic for CPS and sometimes Petri nets. But I don't know whether any companies are actually using this work, or whether it is of any interest in terms of feasibility.

Ecterion
  • Trying to measure security as a level is in most cases pointless and dangerous. There is no such thing as "more secure". Only secure and not secure. If you could measure something as being "more likely" to fail that means you have identified problems - which means you know it isn't secure. – Hector Nov 08 '17 at 16:26
  • I don't understand your point. There is no "secure" or "not secure"; it's not as binary as that. If it were that simple, things like risks / vulnerabilities / norms or whatever would not even exist. How do you qualify something as "secure" or "not secure", anyway? Someone will always be able to break into your system, sooner or later. You cannot say "yes, I am 100% secure"; it's only mathematically possible, not practically. I do agree that risk analysis is quite subjective and not really meaningful. What I am trying to do is quantify this, in order to propose some security countermeasures. – Ecterion Nov 08 '17 at 16:33
  • This sounds like a real challenge. Although a lot of breaches are due to human failure, bad design/configuration, and unpatched software, there are also 0-day breaches, directed attacks, and compromises of the underlying technology. I don't really know how you'd make a risk score for the unknown. I suppose you could look historically, get an idea of how often underlying systems are compromised, and extrapolate that to say "there's an X percent chance of an underlying system being compromised; we don't know which one but believe it will happen." – baldPrussian Nov 08 '17 at 16:51
  • Secure in this context means not known to be not secure. Vulnerabilities solely exist due to human error. How can you quantify someone making a mistake? You can attempt to evaluate development and deployment processes. But pretty much anything else is usually snake-oil marketing. Any truly quantifiable risk means you have identified a problem that should be fixed rather than risk assessed. – Hector Nov 08 '17 at 16:55
  • I have done a lot of work in this area. You cannot perform quantitative analysis of probabilities in socio-technical systems. Even less so when the systems are dynamic (every patch resets all your calculations). NASA and the US Navy came to this conclusion a while ago. – schroeder Nov 08 '17 at 18:04
  • I'm a little confused about what your *question* here is. "what are the methods actually used?" You state what methods you found. Vulnerability analysis on sub-components? Yes, that's also done. As stated, your question is far, far too broad to answer. – schroeder Nov 08 '17 at 18:09
  • The question should be rephrased a little. Yes, security risks are quantifiable: they are a combination of determinations for each system component + determinations for their combinations + other non-component risks. – Overmind Nov 09 '17 at 06:39
  • Thanks for your comments. Schroeder, do you have any information you could share with me on this topic? Even some recent papers (2015-2017) are still working on it, using fuzzy logic to estimate a security level. The goal of this state of the art is to narrow the scope of my question; I can't ask anything narrower right now. I have edited the question to present my overall subject, maybe you could give me some pointers as you seem to know the topic better than I do. – Ecterion Nov 09 '17 at 08:43
  • There are a bunch of realities that you will crash into. First, no company does this sort of analysis. This analysis is currently in the DARPA and theoretical realm. We hope this might result in something actionable, but it is just not done. Second, you will find that you will have to eliminate the sociographic element to the problem so that you can deal with repeatable probability models, else you will create models that run in recursive loops as operators circumvent safe states to get the system to a state that meets their operational needs at the time. – schroeder Nov 09 '17 at 10:25
  • But once you do that, your model is purely academic and will not help to advance the topic in any practical application. Third, you must account for the dynamic relationships between the human attacker and the human operator, which introduces a level of complexity that is difficult to account for. Fourth, attack vectors are as dynamic as the humans testing them, which makes using historical data of limited use. – schroeder Nov 09 '17 at 10:30
  • Fifth, data systems are, in reality, in constant flux. Not just from patching and fundamental changes to the systems themselves, but the configurations, infrastructure, mitigations, and even the human operators change. – schroeder Nov 09 '17 at 10:31
  • So, from my perspective, any work in this area must, foundationally, determine how it is to account for the human elements, both the attackers and the operators. There is a lot of work done on this in "Safety and Accident" probability models. I do not know of anyone who has ported them over for use in information security applications, though. – schroeder Nov 09 '17 at 10:39
