
Threat modeling would definitely help in determining the test cases and where to fuzz, but my question is specific to code scanning. Would threat modeling help in focusing or prioritizing static code analysis in any way?

smiley

3 Answers


In short, yes it would. But you could also use a simple triage approach if threat modelling is too much of an overhead.

In detail -

Code analysis can be a time-consuming activity. Even tool-led approaches will generate large amounts of output that require human review and prioritisation. As such, a threat modelling approach can help to identify and triage that effort. This becomes especially important when reviewing large code bases.

From a threat modelling point of view I would break the code down into the following areas (in order of importance):

  • Code that has unauthenticated access.
  • Code that is authenticated but widely accessible.
  • Code that is authenticated and restricted to a defined and trusted minority (such as a limited subset of code used by admin functions).

This will give you a prioritised approach to the code review and a framework for assigning the appropriate level of risk to each finding. For example, a flaw in code handling user input from an unauthenticated space could be given a higher fix priority than the same type of issue within a limited and trusted area of the application.
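As a minimal sketch of that triage, you could tag each static-analysis finding with the access zone of the code it sits in and review in zone order. The zone labels and finding data below are invented for illustration; no SAST tool emits them out of the box:

```python
# Triage static-analysis findings by the access zone of the affected code.
# Zone names and finding data are hypothetical, for illustration only.
ZONE_PRIORITY = {
    "unauthenticated": 0,   # highest priority: reachable by anyone
    "authenticated": 1,     # authenticated but widely accessible
    "admin": 2,             # restricted to a defined, trusted minority
}

findings = [
    {"rule": "sql-injection", "file": "admin/report.py", "zone": "admin"},
    {"rule": "xss", "file": "web/login.py", "zone": "unauthenticated"},
    {"rule": "path-traversal", "file": "api/upload.py", "zone": "authenticated"},
]

# Review findings in unauthenticated code first, trusted admin code last.
for f in sorted(findings, key=lambda f: ZONE_PRIORITY[f["zone"]]):
    print(f["zone"], f["rule"], f["file"])
```

The same zone tag then doubles as a risk modifier when you write up each finding.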

As requested, this is now combined with the reply from @this.josh (this is an edited version; read @this.josh's reply for the full text, it's worth the read).

If you are looking to do a more thorough job then you need to consider exposure:

  • Is some code run more often?

  • Is some code easier to pass data into?

  • Is some code easier to debug to see the effects of?

  • What resources are accessed by the code?

  • What is the code protecting? (the assets)

  • What code relies on hardware for protection?

  • What code relies on other code to protect it? (what is your protection model?)

  • What code provides fault tolerance or error recovery?

Considering threats, exposures, assets, and the protection model will help you make good choices about what to test and how to test it.
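A minimal sketch of turning those exposure questions into a review order, assuming you score each module against the checklist (the factor weights and module data below are invented for illustration):

```python
# Score modules by exposure factors drawn from the checklist above.
# Factor weights and module data are made up, for illustration only.
EXPOSURE_FACTORS = {
    "runs_often": 2,        # is the code run more often?
    "easy_data_entry": 3,   # is it easy to pass data into?
    "easy_to_observe": 1,   # easy to debug / see the effects of?
    "guards_assets": 3,     # does it protect valuable assets?
}

modules = {
    "http_parser": {"runs_often", "easy_data_entry", "easy_to_observe"},
    "license_check": {"guards_assets"},
    "crash_reporter": {"runs_often"},
}

def exposure_score(factors):
    """Sum the weights of the exposure factors that apply to a module."""
    return sum(EXPOSURE_FACTORS[f] for f in factors)

# Highest-scoring (most exposed) modules get reviewed first.
for name, factors in sorted(modules.items(),
                            key=lambda kv: -exposure_score(kv[1])):
    print(name, exposure_score(factors))
```

The point is not the particular weights but that the checklist becomes a repeatable ranking rather than a gut call per finding.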

David Stubley

Definitely maybe.

Methods and techniques have value when they assist you in accomplishing a task either more completely or with fewer resources than another approach.

So are you looking to do the job thoroughly or more efficiently?

Threat modeling may help you come up with test cases that anticipate a practical attack, and ignore test cases, and thus code, which a threat would not (or could not) attempt to exploit. This is in contrast with fuzzing inputs for most or all accessible code. Of course it makes little sense to check code that an attacker could not access.
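As a rough sketch of that idea, you might filter candidate fuzz targets down to the entry points an attacker can actually reach before spending any cycles on them. All entry points and reachability flags below are hypothetical:

```python
# Keep only attacker-reachable entry points as fuzz/test targets,
# rather than fuzzing every accessible entry point. Data is illustrative.
entry_points = {
    "/login": {"reachable_by_attacker": True},
    "/api/upload": {"reachable_by_attacker": True},
    "/internal/metrics": {"reachable_by_attacker": False},  # behind a firewall
}

targets = [path for path, meta in entry_points.items()
           if meta["reachable_by_attacker"]]
print(targets)
```

The reachability judgment itself is exactly what the threat model supplies.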

If you are looking to do a more thorough job then you need to consider exposure:

  • Is some code run more often?

  • Is some code easier to pass data into?

  • Is some code easier to debug to see the effects of?

And then as @David Stubley notes, you need to consider value of the code and what the code protects or provides access to.

  • What resources are accessed by the code?

  • What is the code protecting? (the assets)

  • What code relies on hardware for protection?

  • What code relies on other code to protect it? (what is your protection model?)

  • What code provides fault tolerance or error recovery?

Considering threats, exposures, assets, and the protection model will help you make good choices about what to test and how to test it.

this.josh
  • Your answer and David's are complementary, thanks for both. Would it be possible to combine them so I can accept the answer? – smiley Jan 31 '12 at 08:58