
I don't really want to get into the specifics.

I found a bug in software from a huge software provider. Like any other responsible user, I reported the bug to the provider.

It is hard to tell whether this bug would be considered a security risk, as it doesn't directly compromise security. However, I could certainly see it being used as a vector to "dig deeper".

Regardless of my findings or my interpretation of the bug, it got me thinking: how does one systematically determine whether a software bug should be labeled a "security flaw"?

– Gavin Youker

  • The obvious answer would be: does it impact security? If you use the CIA triad, you can apply a model for defining what 'security' means. – schroeder Jan 11 '17 at 08:29

2 Answers


The methodical approach is to work through a standard software security framework such as the OWASP Software Assurance Maturity Model (OWASP SAMM) or BSIMM, or any other standard software security maturity model. You need to properly identify things such as the threat agents, attack vectors, security weaknesses, security controls, technical impacts, and business impacts of the bug you found.
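To make that list concrete, the attributes these frameworks ask you to capture per finding can be recorded as plainly as this; the field names follow the list above, and the example values are hypothetical:

```python
from dataclasses import dataclass

# A plain record of the attributes a framework such as OWASP SAMM
# asks you to identify per finding; the values below are made up.
@dataclass
class SecurityFinding:
    threat_agent: str      # who could exploit it
    attack_vector: str     # how they would reach it
    weakness: str          # the underlying flaw
    controls: str          # what currently mitigates it
    technical_impact: str  # what breaks if it is exploited
    business_impact: str   # what that costs the business

finding = SecurityFinding(
    threat_agent="unauthenticated remote user",
    attack_vector="crafted request to a public endpoint",
    weakness="verbose error output leaks internal paths",
    controls="none beyond default logging",
    technical_impact="information disclosure aiding further attacks",
    business_impact="reconnaissance step toward a costlier breach",
)
```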

Then you need to model it using a threat-modeling framework and rank the bug you identified. There are numerous options, such as CVSS, DREAD, and STRIDE. Finally, you need to estimate the ROI of the effort required to resolve the issue; the business will use the values returned by your model analysis to make a go/no-go decision on fixing it.
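As a sense of what CVSS actually computes, here is a minimal sketch of the v3.0 base-score arithmetic; the formula and metric weights are the published values from the CVSS v3.0 specification, and the example vector (AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:L/A:N) is purely hypothetical:

```python
import math

def cvss_base_score(av, ac, pr, ui, scope_changed, c, i, a):
    """CVSS v3.0 base score from already-looked-up metric weights."""
    iss = 1 - (1 - c) * (1 - i) * (1 - a)  # Impact Sub-Score
    if scope_changed:
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    else:
        impact = 6.42 * iss
    if impact <= 0:
        return 0.0
    exploitability = 8.22 * av * ac * pr * ui
    score = (impact + exploitability) * (1.08 if scope_changed else 1.0)
    return math.ceil(min(score, 10) * 10) / 10  # round up to one decimal

# Hypothetical vector AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:L/A:N
# (weights taken from the CVSS v3.0 specification tables).
print(cvss_base_score(av=0.85, ac=0.77, pr=0.85, ui=0.85,
                      scope_changed=False, c=0.22, i=0.22, a=0.0))  # -> 6.5
```

The online calculator linked in the comments performs this same calculation for you.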

– user3496510

  • There is an online tool to calculate a CVSS score (https://www.first.org/cvss/calculator/3.0). You could use this score as part of determining whether your issue presents a security risk. – iainpb Jan 11 '17 at 09:09

I would venture that this question cannot be answered as asked: the topic is too broad and underspecified.

A "security flaw" is first of all domain and application specific. A "security flaw" in one application may be considered a "feature" in another in a totally unrelated domain (e.g., unauthenticated shutdown of a system is bad online on the stock exchange, good on an oil rig if the pump is out of control).

Then, we don't really know how to classify bugs in the first place :)

A good solution could be a risk evaluation based on risk assessments, which in turn are based on your specification; a toy example is sketched below. For this, you need specialists ("the cybersecurity guys") involved in the project itself. It is advisable to involve them from the offering/early planning stage, so they can identify potential pitfalls along the way as development proceeds. Sadly, security is nowadays almost always added in the later/latest stages, and it almost always incurs unnecessary overhead because of this.
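To illustrate what such a risk evaluation reduces to in its simplest form, here is a toy likelihood-times-impact ranking; the 1–5 scales, thresholds, and example findings are all made up:

```python
# Toy likelihood-times-impact risk ranking; the 1-5 scales, the
# thresholds, and the example findings are hypothetical.
findings = [
    {"bug": "unauthenticated shutdown", "likelihood": 2, "impact": 5},
    {"bug": "verbose error message",    "likelihood": 4, "impact": 2},
    {"bug": "session never expires",    "likelihood": 3, "impact": 4},
]

for f in sorted(findings, key=lambda f: f["likelihood"] * f["impact"],
                reverse=True):
    risk = f["likelihood"] * f["impact"]
    label = "high" if risk >= 12 else "medium" if risk >= 6 else "low"
    print(f'{f["bug"]:26} risk={risk:2d} ({label})')
```

Whether a given score crosses the "security flaw" line is exactly the judgment call your specification and your specialists have to make.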

The bad thing about security is: good security is invisible, and thus hard to market or sell; missing security, on the other hand, is most probably devastatingly bad.

And I close with an anecdote from undisclosed sources (allegedly originating at Amazon): at the yearly evaluation meeting, the head of the IT security department says, "We didn't have any incidents last year, so we would like to double our budget for next year."

– D. Kovács