10

Many approaches exist to define security requirements. To keep it simple, I would say that to define a security requirement, one needs to model the threats encountered while building up misuse cases for the specific use cases being worked out. Still, in the end, some security requirements are at the architectural level while others are at the code level.

Most of what I can think of as security requirements, at either of these levels, seems to have test cases (whether automated or not). Still, some cases, like the need to stop an intentional back door, are in my view worth formulating as a security requirement.

  1. I can't think of a test case for it, though! Intent is pretty difficult to prove using a test case! Thus my question: isn't this still worth being a security requirement?
  2. And now the generalized version of the question: would not having a test case for a security requirement be considered an indicator that I have an improper security requirement?
Phoenician-Eagle

6 Answers

5

Absolutely not. Security is in general not something that can be verified through testing. As Dijkstra once wrote, "Program testing can be used to show the presence of bugs, but never to show their absence!" This is especially true for computer security.

There are a whole bunch of misconceptions embedded in the idea that requirements need to be testable:

  • Wishful thinking. Gee, wouldn't it be nice if all of the requirements I have were easily checkable using some black-box tests? Wouldn't it be nice if I had a pony? Sorry, that's not how reality works. Suck it up and deal with it.

    I'll put it another way. Have you heard the story of the late-night drunk who lost his keys while leaving the bar on his way to his car? A bystander asks him why he is looking under the lamppost 100 ft away from where he dropped them, and the drunk says "that's where the light is". People get like that. We have to remember that requirements analysis is not about what's easy to check. Rather, it's about what we need. It is about the customer's needs. That's why they are called requirements, not achievements.

  • Misplaced expectations. Most developers spend most of their life building features. If you want to check that the feature is present and appears to work, testing is a great way to do that. Consequently, many developers get used to thinking of testing as the way you go about all kinds of quality assurance tasks. But that's narrow-minded, blinkered thinking.

    Security is not a feature. Features are about ensuring that good things happen when the user asks for them. Security is about ensuring that bad things never happen, even when the user isn't expecting them. You can't verify security with feature testing.

    Put another way, security is about ensuring there are no surprises, when there is a malicious adversary out there poking at your system in unexpected ways. Testing is about poking at the system in expected ways, and consequently doesn't tell us very much about what happens when the system gets poked at by an adversary.

  • Thinking that testing is all there is to quality assurance. Ensuring that a system will be secure is in some ways a quality assurance problem. When it comes to security assurance, testing often isn't the right tool for the job.

    Security demands different methods. Some of the methods that are most appropriate to checking whether a system is secure and meets its security requirements tend to involve: threat analysis, architectural security analysis, security code review, application of static analysis tools, fuzzing, and penetration testing. Folks who aren't familiar with the security domain tend to be unaware of these methods, or don't realize the important role they play in evaluating the security of a complex software system.

    Gary McGraw likes to call penetration testing a "badness-ometer". It's a meter with two extremes: at one end, "Your Security Is Really Bad", and at the other, "Don't Know". It can never tell you your security is good; it can only deliver bad news, but the absence of bad news does not mean you have good security.

There are some security requirements that can be verified through testing: primarily, "security features". But security is about a lot more than just security features. Software assurance is fundamentally different from security features.

Example: One plausible requirement is "the system should contain a way to reset my password". That's a feature, and one that can be reasonably checked with testing. On the other hand, another plausible requirement is "the password reset feature should not endanger security". That's a security assurance requirement, and no way you're going to be able to check it with testing.
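To make that contrast concrete, here is a minimal, self-contained sketch (my addition, not part of the original answer) of what the testable half looks like. The toy ResetDemo class and its names are made up; the test can show the reset feature exists and works, which is all feature testing can do.

```python
# Toy stand-in for a password-reset feature, plus the kind of test that can
# verify the *feature* requirement. Nothing here addresses the *assurance*
# requirement ("the reset feature should not endanger security").
import secrets

class ResetDemo:
    """Illustrative in-memory account store with a password-reset flow."""
    def __init__(self):
        self.passwords = {}   # email -> password (plaintext only for brevity)
        self.tokens = {}      # reset token -> email

    def register(self, email, password):
        self.passwords[email] = password

    def request_reset(self, email):
        token = secrets.token_urlsafe(16)
        self.tokens[token] = email
        return token          # a real system would email this, not return it

    def complete_reset(self, token, new_password):
        email = self.tokens.pop(token)        # raises KeyError on a bad token
        self.passwords[email] = new_password

    def login(self, email, password):
        return self.passwords.get(email) == password

def test_password_reset_feature_exists():
    app = ResetDemo()
    app.register("alice@example.com", "old-secret")
    token = app.request_reset("alice@example.com")
    app.complete_reset(token, "new-secret")
    assert app.login("alice@example.com", "new-secret")
    assert not app.login("alice@example.com", "old-secret")

test_password_reset_feature_exists()
# Passing proves the feature is present and works. It says nothing about
# token guessability, user enumeration, rate limiting, and so on.
```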

So, no, security requirements aren't necessarily testable. There are plenty of security requirements that cannot be easily checked for compliance using testing; other methods are needed.

P.S. Last word: You must read Bruce Schneier's essay on security assurance. It's exactly on target. Schneier points out that we often think about security the wrong way: we start out by assuming the system is secure, until proven wrong. Schneier points out that this is backwards. He suggests that if you want your system to be secure, a better way is to start by assuming it is insecure, until proven otherwise. (And, I'll interject, testing ain't gonna be terribly useful at proving a system is secure. Just a fact of life)

D.W.
5

There are two kinds of security requirements:

  • Security features (e.g. passwords should be hashed)
  • Secure features (e.g. no SQL Injection)

Security features should absolutely have an exact test case; these are things that can be tested for.

On the other hand, the "secure features" requirements are effectively demanding a negative: No injection, No XSS, No overflows, No backdoors...
While these can be tested to prove the requirements are NOT met, you cannot prove that they are. "You can't test security in..."
Much like any other non-functional requirement, a single test case is not sufficient to prove the requirement. Take, for example, an up-time requirement: "The system will support 2000 simultaneous users, without crashing, for 1 week". Can you prove it will never crash under that load? No: you can test against it, and if it fails you'll know it, but you cannot be positive that a different set of users, in different circumstances, won't cause it to crash.
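To illustrate that asymmetry with the "no SQL injection" example, here is a small sketch (my addition, using a toy sqlite lookup): a few payloads can demonstrate the requirement is NOT met, but passing those same payloads says nothing about the ones you didn't try.

```python
# Probing a toy lookup for SQL injection (sqlite3, standard library).
# The payload list can prove the requirement is NOT met; it can never prove it is.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret'), ('bob', 'hunter2')")

def lookup_vulnerable(name):
    # Builds the SQL by string formatting -- the classic injection mistake.
    return conn.execute(f"SELECT secret FROM users WHERE name = '{name}'").fetchall()

def lookup_parameterized(name):
    # Uses a bound parameter instead of string concatenation.
    return conn.execute("SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

PAYLOADS = ["alice", "' OR '1'='1", "nobody' UNION SELECT secret FROM users --"]

def appears_injectable(lookup):
    """True means the 'no SQL injection' requirement is demonstrably NOT met."""
    return any(len(lookup(p)) > 1 for p in PAYLOADS)

assert appears_injectable(lookup_vulnerable)         # bad news: requirement violated
assert not appears_injectable(lookup_parameterized)  # absence of bad news != proof
```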

Better would be an additional requirement (as @beth said) to have a review, and the requirements should be around the review itself: who can perform it, what types of reviews, what needs to be fixed, etc.

The important part is to be clear about what kind of requirement you're defining: a negative, "assurance" type of secure feature requirement, or a positive security feature.
The latter can have regular test cases; the former must be checked via different means. If you're not sure which type of requirement it is, you probably need to break it down further.
(e.g. "Passwords must be secure" should translate to: "passwords should be stored as salted hash", "hash should be cryptographically secure algorithm", "Strong ACLs on the data store", "Do not expose passwords in email or web page", etc etc. Some of these are functional features, some of them are non-functional secure requirements. Some can be tested via test case, some via pentest, and some via code review. Etc.)

AviD
  • Nice overview. I take from this that all (security and secure feature) security requirements need to be mapped to at least a test case. Depending on whether it is security or secure, the test case shall be designed to prove either that the requirements are NOT met or that they are fully met. – Phoenician-Eagle Apr 22 '11 at 19:49
  • @Paul, that's a good way of putting it... And yet, there will always be some types that can't have any simplistic test case attached to them, and will simply need a "must be reviewed". – AviD Apr 24 '11 at 08:35
3

I'd say - "yes, they need to be testable" and also "if you have an untestable requirement, you need to rewrite". But I work for DoD contracts, and my gut reaction is as much about a Pavlovian reflex to being beaten up by untestable requirements as it is based in any rational thought.

I've often seen high-level requirements that are untestable, and I think the "no intentional back door" requirement could be such a case. But you need to drill down to some requirements that aim at prevention measures, such as:

  • the system shall be reviewed for backdoors by trusted external security verification agents
  • reviews shall be conducted before the deployment of every major release
  • reviews shall include ....

I'm not sure how your business uses requirements other than for testing... In mine, failure to meet agreed-upon customer requirements can be cause for contract violation, so we're careful not to let any impossible negatives into the world that would mean massive scope creep.

bethlakshmi
  • Fair enough, but even if I outsource the testing to an external company, how can I trust their test cases to discover an intentional backdoor... Just wondering! – Phoenician-Eagle Apr 21 '11 at 21:27
  • You could put an intentional backdoor in and see if they find it :-) - of course, if the backdoor is in the compiler (http://cm.bell-labs.com/who/ken/trust.html) then you still have a problem. I still say that security requirements aren't testable in general, however; you can only test for the presence of controls. – frankodwyer Apr 21 '11 at 21:32
  • @phoenician eagle - you have two problems with any external group: are they capable of doing the job, and are they honest? Resumes, interviews, and sometimes professional certifications are a good indication of talent. In terms of ethics, you can often find bonded companies; in the defense world, clearances are sometimes used as trust points as well. You can get crazy with background research, too. Admittedly, there's no perfect answer. It comes down to risk - how much time and money will you spend to get what degree of confidence? – bethlakshmi Apr 26 '11 at 13:46
  • @frankodwyer - yes, but I think in design you have to boil down your big picture requirements that aren't testable into a set of controls that are testable, with requirements on those controls. In that sense, I'd treat security requirements like user requirements, a starting point for testable requirements. – bethlakshmi Apr 26 '11 at 13:48
3

In the DSA and ECDSA signature algorithms, when generating a signature, there is a value k which must be chosen randomly and uniformly in a given range. The uniform randomness is very important for security (biases can be exploited in key-recovery attacks), so the relevant standards (e.g. X9.62 for ECDSA) spell out uniformity as an absolute requirement. But it is also, by nature, untestable... and that's a problem.

The solution used by X9.62 is that there are also "Approved" (with a capital A) pseudo-random number generators which can be used to produce k -- but this does not solve the issue, since those Approved RNGs must be seeded with an "entropy string", and the actual entropy is not testable.

So while having an untestable security requirement is something to avoid, sometimes you cannot do without.

Edit: and yet you can, in the specific case of DSA and ECDSA, turn the signature algorithm into another algorithm which is backward compatible, but deterministic and thus testable. See RFC 6979. This does not invalidate the main point, which is that testability cannot always be obtained; but it is a very worthwhile characteristic, which justifies quite some effort if it can be attained at all.
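One thing you can do, as a sanity check rather than as a test of the requirement itself, is look for gross statistical bias in the generated k values. A rough sketch (my addition; the generator and the secp256k1 curve order are stand-ins for whatever you actually want to check) - passing it proves nothing about uniformity, which is exactly the point above:

```python
# A statistical sanity check on nonce generation: it can flag a grossly biased
# generator, but passing it does not prove uniformity.
import secrets

# secp256k1 group order, used here only as an illustrative range for k.
CURVE_ORDER = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def generate_k():
    """Stand-in for the nonce generator you actually want to sanity-check."""
    return secrets.randbelow(CURVE_ORDER - 1) + 1

def chi_square_top_nibble(samples=16000):
    """Bucket nonces by their top 4 bits and compute a chi-square statistic."""
    buckets = [0] * 16
    for _ in range(samples):
        buckets[generate_k() >> (CURVE_ORDER.bit_length() - 4)] += 1
    expected = samples / 16
    return sum((observed - expected) ** 2 / expected for observed in buckets)

stat = chi_square_top_nibble()
# ~30.6 is roughly the 99th percentile of chi-square with 15 degrees of freedom;
# exceeding it suggests (but does not prove) a biased generator, and staying
# under it proves nothing at all about true uniformity.
print("chi-square:", round(stat, 1), "- suspicious" if stat > 30.6 else "- looks plausible")
```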

Thomas Pornin
  • nice way of defending your statement :) – Phoenician-Eagle Apr 21 '11 at 23:01
  • while I agree that randomness cannot be tested for, in the sense that you cannot prove it, there are still useful statistical unit tests you can do, i.e. the generator should at least *look* uniform. If it doesn't, that's most likely a bug. For example if the generator or entropy source suddenly starts producing the string "hey, nice hat!" over and over, well a pure random source could in principle do that by chance but it's more likely that something is wrong :-) – frankodwyer Apr 22 '11 at 08:06
  • Heh, this reminds me of that Dilbert cartoon, where he meets the random troll: "Nine. Nine. Nine. Nine." Dilbert: "How can you tell that it's really random?" Troll: "That's the problem with random, you can never be sure." – AviD Apr 22 '11 at 13:44
  • But isn't the randomness algorithm reviewed? Even if the output isn't actually tested, that doesn't mean the *randomness* isn't tested, just not via an end-user test case... – AviD Apr 24 '11 at 08:36
1

Security is a quality attribute rather than a functional attribute, and so you can't generally test for it.

What you can test for, or at least should be able to, is the presence of a control that was specified (and if the control involves code, you can test its functionality).

For example, let's say that your control against an intentional back door is a code review (without wanting to open a debate as to whether that is a good, bad, or sufficient control for this): you can test that the review happened.
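As a sketch of what "test that the review happened" might look like in practice (my addition, with a made-up change-log format; real data would come from your VCS or ticketing system):

```python
# Checking the *presence* of a review control over a made-up change log.
changes = [
    {"id": "c101", "author": "alice", "reviewed_by": "bob"},
    {"id": "c102", "author": "bob",   "reviewed_by": "carol"},
    {"id": "c103", "author": "carol", "reviewed_by": None},     # control skipped here
]

def unreviewed_changes(log):
    """Changes with no reviewer, or whose only 'review' was a self-review."""
    return [c["id"] for c in log
            if not c["reviewed_by"] or c["reviewed_by"] == c["author"]]

print(unreviewed_changes(changes))   # -> ['c103']
# This verifies the control was exercised for each change; it says nothing about
# whether the review would actually have caught an intentional back door.
```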

frankodwyer
  • I tend to disagree with the statement "you can't generally test for it". You need to have a test case that at least shows that, for that test case, the requirement is fulfilled. Now, why do I tend to add such an untestable requirement? Because otherwise I would end up with a huge list of very fine-grained requirements... In the example you provided: testing that the review happened is like taking the attendance list in a classroom versus going for an exam! – Phoenician-Eagle Apr 21 '11 at 21:33
  • By "you can't generally test for it (security)" I mean that there are many important security requirements for which no test exists. Example: I require some of my data to be confidential, but even I cannot tell just by looking at my data whether it is or not. – frankodwyer Apr 21 '11 at 21:36
  • I understand what you are saying, but this is somewhat my confusion! As you might know, security requirements are typically not intended to be understood, nor used, by a single person, and so such requirements are tough to swallow :-( – Phoenician-Eagle Apr 21 '11 at 21:38
  • I think, as Beth says, you need to drill down to get things that are unambiguous and that you can test for - however, I would call these things controls (all of beth's examples are controls, and you can test whether they are there or not). Besides, it's not even the case that all *requirements* are testable, never mind security requirements. For example, the requirement that the software functions correctly - there's no test for that. You can prove it doesn't work but you can't prove it does. – frankodwyer Apr 21 '11 at 21:46
1

One of my focus areas is the applicability of security testing, and in the majority of cases the testing I recommend is there to confirm that the security status meets the expected status.

By that I mean: you have defined security in the design, you have reviewed the code and scanned the application, and you have (hopefully) closed off all issues prior to go-live; the penetration testing is then the confirmation that the previous steps worked. So from that viewpoint it does include test cases.

Exceptions obviously include:

  • testing as a new 0-day comes out, as the original model will not have had that 0-day as a test case
  • testing against a new threat model: if the threat environment changes, again, the test case may not have been looked at previously

Rory Alsop