I am working on a state of the art of security quantification, i.e. a numerical assessment of the security of a system.
In my research, most of the work is not recent (up to 2012 so far) and is quite theoretical. Most of it relies on semi-Markov processes to state the probability that a system becomes unavailable / leaks information / etc. This derives from the safety analysis of a system, where you check the probability that your system reaches certain good / bad states.
The idea is to take an attack graph and compute the probability that each security layer fails, until the attacked asset is reached.
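To make this concrete, here is a toy sketch of scoring one attack path. All per-layer probabilities are invented, and layer failures are assumed independent, which is a strong assumption in practice:

```python
# Hypothetical attack path: probability that an attacker defeats each
# security layer on the way to the asset (values are made up).
path_layers = {"firewall": 0.4, "auth": 0.2, "hsm": 0.05}

# Under the independence assumption, the probability of reaching the
# asset is the product of the per-layer defeat probabilities.
p_success = 1.0
for layer, p_defeat in path_layers.items():
    p_success *= p_defeat

print(p_success)  # 0.004 (up to floating-point noise)
```

The real difficulty, of course, is where those per-layer numbers come from, which is exactly the estimation problem discussed below.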
I was wondering if anyone here knows which methods are actually used in practice. Do we just do a vulnerability analysis for each system component? (I doubt it, as that qualifies as a subjective analysis, and the final security value can depend heavily on the method used.)
Edit: to make things clearer, I will describe my whole subject (it's for a PhD thesis I have just started) and where my previous question fits.
I have a system (more or less secure) and I want to generate all the attack paths to a function F from a set of entry points E. Once this is done, I want to evaluate each of the attack paths obtained and decide whether the security of a path is sufficient for the system owner / designer. If it is not, I have to add layers of security along the path (which could be anything: firewall, HSM, TPM, crypto ...) in order to meet the security constraints while minimizing the impact on cost and performance.
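A naive way to see the optimization part, with a made-up catalogue of candidate layers (all costs and defeat probabilities are hypothetical), is a brute-force search for the cheapest subset of layers that pushes the breach probability of one path under a threshold:

```python
from itertools import combinations

# Hypothetical catalogue: layer -> (cost, probability an attacker defeats it).
layers = {"firewall": (10, 0.4), "hsm": (50, 0.05), "tpm": (30, 0.1)}
THRESHOLD = 0.05  # security constraint: P(path breached) <= 5%

best = None  # (cost, chosen subset)
for r in range(1, len(layers) + 1):
    for subset in combinations(layers, r):
        p_breach = 1.0
        cost = 0
        for name in subset:
            c, p = layers[name]
            cost += c
            p_breach *= p  # layers assumed independent
        if p_breach <= THRESHOLD and (best is None or cost < best[0]):
            best = (cost, subset)

print(best)  # cheapest subset meeting the constraint
```

This is only a sketch of the decision problem: a real instance would have many paths, correlated layers, and performance constraints, and would need something smarter than enumerating all subsets.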
From the above problem, there is a need for security quantification, both to state whether the system meets the owner's requirements and to express security as a constraint in the optimization problem. I am not fond of using risk analysis, mainly because there are too many methods available (ETSI, EVITA, EBIOS ...), some using the ISO 27005 values, some not, and also because it is static: a snapshot of the system at time t, with values that can be quite subjective.
What I like about probabilities is that you can work on the component state itself, without relying on subjective values. However, there is still a modeling problem: how do you compute the probability of going from a good state to a bad state? Does it depend on the attack scenario? And, as Schroeder said below, is it still valid for a dynamic system?
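For the discrete-time case, once the transition probabilities are somehow estimated, the standard absorbing Markov chain machinery gives the quantities these models report (probability of ending compromised, mean time to security failure). A minimal sketch, with a made-up 4-state chain in the spirit of Madan-style models (all transition values are invented):

```python
import numpy as np

# Hypothetical DTMC: transient states G (good), V (vulnerable);
# absorbing states C (compromised), S (fail-secure shutdown).
Q = np.array([[0.9, 0.1],    # G -> G, G -> V
              [0.3, 0.3]])   # V -> G, V -> V
R = np.array([[0.0, 0.0],    # G -> C, G -> S
              [0.3, 0.1]])   # V -> C, V -> S

# Fundamental matrix: N[i, j] = expected visits to transient state j
# before absorption, starting from transient state i.
N = np.linalg.inv(np.eye(2) - Q)

# Absorption probabilities: B[i, j] = P(absorbed in j | start in i).
B = N @ R

print(B[0])        # [0.75 0.25]: starting in G, P(compromise) = 0.75
print(N[0].sum())  # 20.0: mean number of steps before absorption
```

This answers the "what do you compute" side; the open question in my subject remains the "where do the numbers in Q and R come from" side, and whether they stay meaningful when the system and the attacker evolve.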
The most cited paper in the field is B.B. Madan's "A method for modeling and quantifying the security attributes of intrusion tolerant systems", which was funded by DARPA, Space and Naval Warfare and NASA, and a lot of work uses it as a basis.
So far, Markov models look to be the most used in the field, with fuzzy logic for CPS and sometimes Petri nets. But I don't know whether any companies actually use this work, or whether it is feasible in practice.