3

Thinking about software security metrics, I've come up with the following so far:

  • number/type of CWEs detected by developers (bug reporting)
  • number/type of CWEs detected by static analysis
  • number/type of warnings at compile time (e.g. from stack protector / FORTIFY_SOURCE)
  • number/type of (presumed) memory leaks (running the software under Valgrind or similar)
  • number/type of unsafe function calls (sprintf instead of snprintf)
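For example, the last metric can be gathered with a simple script. A rough sketch (the function list and the `*.c` glob are illustrative assumptions, not a complete audit):

```python
# Rough sketch of the "unsafe function calls" metric: count calls to a few
# known-dangerous C functions in a source tree. The function list and the
# *.c glob are illustrative assumptions, not a complete audit.
import re
from collections import Counter
from pathlib import Path

UNSAFE = ("strcpy", "strcat", "sprintf", "vsprintf", "gets")
CALL_RE = re.compile(r"\b(" + "|".join(UNSAFE) + r")\s*\(")

def unsafe_call_counts(root):
    """Return a Counter mapping unsafe function name -> number of call sites."""
    counts = Counter()
    for path in Path(root).rglob("*.c"):
        counts.update(CALL_RE.findall(path.read_text(errors="ignore")))
    return counts
```

A real deployment would use a proper parser or a static-analysis tool rather than a regex, but this gives a trend line per release.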

Now the questions:

  1. What other security metrics for software would you suggest?
  2. Is there a state-of-the-art reference on this topic?

I was only able to find security metrics for IT in general, not for software (software development).

The goal is to measure and get an overview of how good or bad the software we develop is, to decide where to increase or decrease effort on secure software development practices, and to see how and where the security process needs changing.

AviD
boos
  • I think it's important to highlight that security metrics do not show how secure software is; rather, they show how bad it is. Gary McGraw makes this point in his presentations. I also recommend looking at BSIMM – Manipulator Jun 11 '14 at 18:13
  • @Manipulator, what security metrics highlight is just a point of view. Does BSIMM suggest any security metrics specific to secure software development? – boos Jun 17 '14 at 11:04

2 Answers

5

Say all the metrics you listed in your question report zero. Does that mean your software is secure? Does finding 0 bugs mean there are no bugs?

The reason you're having a hard time finding software-only metrics is because software doesn't exist in a vacuum. Here's a question that's just as difficult: How much is a piece of software worth?

There are several questions I'd ask if someone just handed me a report with the metrics you listed here.

What security policies are your metrics being weighed against? As much as I despise the topic (BOOOOORING!!!!), security in any organization--and therefore in its software--needs to be based on sound security policy. Concentrating on software alone, as you want to do in your question, sidesteps a piece that shouldn't be sidestepped. And please note that policies can determine how severe certain bugs are in comparison to each other.

That said:

OWASP has a presentation here that may interest you.

Warnings from the presentation: Software Security Metrics

  • Metrics are context sensitive and environment-dependent
  • Architecture dependent
  • Aggregation may not lead to strength

Here are a few metrics they list:

  • Size and complexity
  • Weakness/LOC (CWE)
  • Weakness (severity,type) over time (CVSSv2, CWE)
  • Cost per defect
  • Attack surface (# of interfaces)
  • Layers of security
  • Design Flaws (CWE)
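As a concrete illustration of the "Weakness/LOC" item, defect density per KLOC is easy to compute once a scanner has produced findings. A minimal sketch, where the `(cwe_id, severity)` tuple format is my own assumption:

```python
# Minimal sketch of the "Weakness/LOC" metric: weaknesses per thousand lines
# of code. The (cwe_id, severity) tuple format is an illustrative assumption.
def weakness_density(findings, total_loc):
    """findings: list of (cwe_id, severity) tuples; total_loc: lines of code."""
    if total_loc <= 0:
        raise ValueError("total_loc must be positive")
    return len(findings) / (total_loc / 1000.0)
```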

And this paper, while not a one-stop reference, does present a method of weighting specific kinds of security flaws, which could be used to develop a scoring system that you can apply to developers/applications.

And lastly, a very important statistic that I think you should consider is the number of false positives, both by testing tools AND by human testers.
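That rate is trivial to track once findings are triaged. A hedged sketch, assuming each flagged finding is manually classified as confirmed or not:

```python
# Sketch of a false-positive-rate metric for a scanner or human tester.
# "reported" is everything flagged; "confirmed" is what triage upheld.
def false_positive_rate(reported, confirmed):
    if reported == 0:
        return 0.0  # nothing flagged, so no false positives
    return (reported - confirmed) / reported
```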

And then there's CVSS.

boos
avgvstvs
  • Say that perfect security doesn't exist; finding a way to measure, or at least get an idea of, how secure or insecure all our software is, is still useful for moving from FUD (fear, uncertainty and doubt) to a more numeric way of achieving more security (not perfect, but more than before), driven by numbers. Suppose you have 40 pieces of software with many weaknesses reported in them: which software do you start fixing? Which weaknesses? On the next release, where do you focus your effort to prevent or fix weaknesses/vulnerabilities? – boos Jun 17 '14 at 15:07
  • The last paper referenced gives you a tool to do exactly that kind of analysis. – avgvstvs Jun 17 '14 at 15:37
  • @Boos I added one more reference: CVSS – avgvstvs Jun 17 '14 at 15:48
  • I totally disagree with your introduction, but in the end I've chosen your answer as the unique answer to the question. To help you understand why metrics are important, I suggest you read this book: http://www.amazon.com/Security-Metrics-Replacing-Uncertainty-Doubt/dp/0321349989 – boos Jul 22 '14 at 15:42
  • (Bought the book.) At any rate, I didn't mean to communicate that metrics weren't important, only that metrics by themselves are insufficient, which a novice security professional might not understand: Some come from a pure tech background and might not be prepared for the fact that security policy guides our interpretation of any metrics we collect which in turn guides our remediation priorities. – avgvstvs Jul 22 '14 at 20:57
  • In other words, if you have no policies your metrics have no solid footing. – avgvstvs Jul 22 '14 at 21:00
  • Ok, understood your point, and with that purpose it makes sense. It's always difficult to share with pure tech people what some numbers can communicate, even if they're not 'tech founded'. At the moment I'm trying to improve my 'consultancy/managerial' sec skills. Do you have any suggestions for other books that are a MUST read on sec management? – boos Jul 23 '14 at 09:49
  • IMHO most security mgmt books rehash ancient concepts that were enumerated by the US Military in the 1970s. I would start with the Ware report "Security Controls for Computer Systems" and the Anderson report, "Computer Security Technology Planning Study." Schell: "Computer Security: The Achilles heel of the Airforce" These papers detail the birth, implementation, and death of Multics. – avgvstvs Jul 23 '14 at 12:50
  • And if you're worried that these papers are just too aged, David Bell wrote "Looking Back at the Bell-La Padula Model" in 2005, and linked the problems in the 1970s to problems today. – avgvstvs Jul 23 '14 at 12:51
  • I've studied most of these when I got the CISSP. Most likely I'll move on to Gary McGraw's books. – boos Jul 23 '14 at 13:27
3

Andy Ozment has proposed an interesting security metric for software in his paper Bug auctions: Vulnerability markets reconsidered (WEIS 2004). Roughly speaking, his idea is that a software vendor sets a prize for the next vulnerability to be found. The prize starts from a fixed amount (e.g. $100) and grows every day (e.g. $10/day). When a vulnerability hunter (white/black hat) finds a new vulnerability, he is awarded the current amount of money, and the prize is reset to the initial amount and starts growing again. After a while, the software vendor can use the current prize as a security metric for his software. So if the prize now stands at $10,000, it means that even under such an incentive, hunters still cannot find a new vulnerability, and thus the software is pretty secure. There is some economic analysis in the paper showing the advantages and disadvantages of this idea.
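The prize dynamic described above is easy to simulate. A toy sketch using the example numbers from this paragraph (not parameters taken from the paper's own analysis):

```python
# Toy model of the bug-auction prize: it starts at `base`, grows by `rate`
# per day, and resets to `base` on any day a vulnerability is found.
def prize_on_day(day, find_days, base=100, rate=10):
    """Current prize on `day`, given the days on which vulnerabilities were found."""
    last_reset = max((d for d in find_days if d <= day), default=0)
    return base + rate * (day - last_reset)
```

The value of `prize_on_day` for the current day is then the security metric: the longer hunters go without a find, the higher it climbs.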

The paper also has some citations related to software security metrics that might be helpful to you. They might be old though.

Thanks.

ZillGate