8

I've asked a few questions relating to schemes for various security-related functions, and posited schemes to accomplish those goals. In the responses, I see a conflict between two fundamental principles of IT security: "defense in depth" (make an attacker break not one but many layers of information security) and "complexity is the enemy of security" (the KISS principle).

The question, IMO, is obvious: when does one of these two take priority over the other? You can't have both in the extreme; adding layers necessarily increases the complexity of the system, while simplifying security typically "thins" it. There's a spectrum between "ideal" complete simplicity and "ideal" infinite depth, and thus a balance to be struck. So which of these, when they conflict, is generally to be preferred in the design of an information security scheme?

KeithS

3 Answers

5

This is written from the perspective of a software developer and project manager who often deals with sensitive data in the apps I'm involved in creating.

Defense in depth is not necessarily at odds with the principle of simplicity. Simplicity is difficult, and it's not what you might think: it doesn't necessarily mean a lack of effort.

It can (and in this case, I think it usually does) mean not using complex, home-grown mechanisms when established practices and tools exist.

It can also mean keeping your systems simple to reduce the attack surface area, or making them less attractive targets by not storing the type of data that people want to steal.

There is a true art, as well as a science, to finding the right balance of simplicity vs. functionality, but in my experience, simplicity in the right aspects is one of the keys to defense-in-depth.

The essence of what I'm going to try to get at here is that the amount of time you spend defending your application should be proportional to the sensitivity of the data contained within, and the size of the attack surface area.

Here are two ways that simplicity adds to defense in depth by subtracting something.

  1. Carefully deciding whether you should store sensitive data in the first place
    • One of the most blindingly obvious truths about protecting sensitive data is that if you don't have sensitive data, you don't need to protect it.
    • When developing systems/software, the most basic thing you can do to increase the relative security of your system is to limit the sensitive data you store in the first place.
    • By deciding not to store unnecessary sensitive data, you are making the system simpler while also exercising defense-in-depth by considering defense as part of your initial requirements-gathering phase.
  2. Reducing the attack surface area
    • Similar to the above, but this time looking at features. Every single user input control - text box, drop-down list, etc. - is a potential "window" that, if unprotected, can be an entry point for an attacker. If you fail to validate input or to sanitize output, that control may be vulnerable to any number of well-known attacks. As a developer, I can tell you that when creating large, complex sites/forms, it's easy to miss a validation control here or validate something incorrectly there. (A minimal sketch of this kind of validation follows this list.)
    • The business might want a fancy interface that has all sorts of bells and whistles, but the benefit of having that interface/feature should be weighed against the cost of securing it, and the downside of the increased attack surface area.
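
To make the input-validation point concrete, here's a minimal sketch (Python, with hypothetical field names and rules of my own invention) of allow-list validation: every field must match an explicit pattern, and anything that doesn't match is rejected outright rather than cleaned up after the fact.

    import re

    # Hypothetical allow-list rules: each field must match an explicit pattern.
    # Anything that doesn't match is rejected (allow-list, not block-list).
    FIELD_RULES = {
        "username": re.compile(r"^[a-zA-Z0-9_]{3,20}$"),
        "zip_code": re.compile(r"^\d{5}(-\d{4})?$"),
        "quantity": re.compile(r"^[1-9]\d{0,3}$"),
    }

    def validate(form):
        """Return the validated form, or raise ValueError on any violation."""
        unexpected = set(form) - set(FIELD_RULES)
        if unexpected:
            # Unknown fields are attack surface too; reject them.
            raise ValueError(f"unexpected fields: {sorted(unexpected)}")
        for name, rule in FIELD_RULES.items():
            if not rule.fullmatch(form.get(name, "")):
                raise ValueError(f"invalid value for {name!r}")
        return form

    # Even a drop-down arrives as text, so it's validated like any other input.
    validate({"username": "alice_1", "zip_code": "12345", "quantity": "2"})

Note that the cost of each additional control is one more entry in the rule table, which is exactly the kind of simplicity that scales.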

Also, keeping security simple doesn't necessarily mean having fewer defenses. It simply means not making things harder than necessary. Ways you can keep security simple and still have defense-in-depth include:

  • Having secure defaults. Establish your normal defenses. You can have fifty layers of defenses, but if you know what your baseline is, you are still keeping it simple by just following the normal routine.
  • Using established best practices. Similar to the above, for almost every type of I.T. activity, there is already an established set of best practices. Simply following those instead of coming up with wild schemes of your own keeps things simple.
  • Using established, trusted tools. Of course, no tool is foolproof, but if you're adding layers of defense, using established tools instead of coming up with your own or relying on lesser-known, unsupported tools keeps things simpler in the long run. You'll have better documentation, a wider user community for support, and a greater likelihood that when there's a problem, it'll get patched quickly. (A short illustration of these last two points follows this list.)
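
As one illustration of the last two points, here's a hedged sketch (Python standard library only; the work factors are illustrative, not a recommendation) of password hashing that leans on an established, well-reviewed primitive, scrypt, instead of a home-grown scheme:

    import hashlib
    import hmac
    import os

    # Illustrative scrypt work factors; follow current guidance in production.
    SCRYPT_PARAMS = {"n": 2**14, "r": 8, "p": 1}

    def hash_password(password):
        """Hash a password with a fresh random salt using the stdlib scrypt KDF."""
        salt = os.urandom(16)
        digest = hashlib.scrypt(password.encode(), salt=salt, **SCRYPT_PARAMS)
        return salt, digest

    def verify_password(password, salt, expected):
        """Recompute the hash and compare in constant time."""
        digest = hashlib.scrypt(password.encode(), salt=salt, **SCRYPT_PARAMS)
        return hmac.compare_digest(digest, expected)

    salt, stored = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", salt, stored)
    assert not verify_password("wrong guess", salt, stored)

The point isn't scrypt specifically; it's that the hard parts (the KDF, the constant-time comparison) come from well-reviewed code, so your own contribution stays small and auditable.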

Defense-in-depth is about layers of security. Keeping each layer as simple as possible is the key to applying the principle of keeping it simple to a defense-in-depth strategy.

David Stratton
3

"Defense in depth" is usually pushed out of a feeling of paranoia. You implement layers upon layers of defense in response to panicky rants from upper management.

"Low complexity" is usually promoted in order to reduce costs and delivery times. You reduce complexity so as to meet the deadlines imposed from upper management.

Often, "upper management" will insist on both aspects at the same time. It is then your job to notify them of how irksome and pesky details like the Laws of Physics may prevent or slow down the simultaneous fulfillment of both goals.

Ultimately, this is not a technical decision; this is about economics. Risk analysis ought to put a price on intrusions, and thus assess how much money defense in depth may save in the long run, by containing damage resulting from a successful attack. Similarly, increased development costs and delays will be given their own financial estimate. It is up to the decision-makers to balance these costs as it best suits their strategy; the important point being that this is a matter of policy, not of technology.
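
To make that concrete, here's a toy annualized-loss-expectancy comparison (Python; every figure below is invented for illustration) showing the shape of the decision: the extra layer is worth buying only if the expected loss it prevents exceeds what it costs.

    def ale(loss_per_incident, incidents_per_year):
        """Annualized loss expectancy: expected yearly cost of an incident type."""
        return loss_per_incident * incidents_per_year

    # Invented figures for illustration only.
    cost_of_extra_layer = 40_000      # yearly cost: tooling, dev time, delays

    ale_without = ale(500_000, 0.30)  # breach costs $500k, ~30% chance per year
    ale_with = ale(100_000, 0.30)     # the extra layer contains the damage

    savings = ale_without - ale_with  # $120,000 expected yearly savings
    print("Add the layer" if savings > cost_of_extra_layer else "Skip the layer")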

Thomas Pornin
  • I don't disagree with what you're saying, but that first paragraph is funny to me, simply because it's at odds with my personal experience. I'd like to run into one person in upper management who actually gives a hoot about security or even knows what "defense-in-depth" means. Most of the time, they figure it's "an I.T. job" to worry about security - they just want to know how to get tasks done/make a profit. Educating them to care and make decisions that don't increase risk is usually the challenge I face. But +1 because I agree with the conclusion 100%. – David Stratton Jan 17 '13 at 21:53
  • Well, I _have_ encountered the paranoiac type. People who would rant for hours on how dangerous it can be to put a Web server on a Windows server which is protected by _only_ a specific DMZ and two layers of firewalls (and that server contained only public data, not even a list of user names or whatever). – Thomas Pornin Jan 17 '13 at 22:03
  • Just in case it was misinterpreted, I wasn't arguing anything you said, just commenting that I wish I'd run into some senior management that made security a priority. ;-) – David Stratton Jan 17 '13 at 22:10
1

Defense in depth and simplicity are not contradictory as long as each layer is simple and independent. Each layer should not depend on the other layers, or it really isn't good defense in depth (since at that point it's really just one complex layer). If each layer can be worked with separately and is implemented in a simple, verifiable and maintainable way, then there is really nothing at odds.
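
One way to read "independent": each layer is a self-contained check that neither knows about nor relies on the others, so any one of them can be tested, verified, or replaced on its own. A minimal sketch (Python, with hypothetical checks standing in for real controls):

    # Each layer is a standalone predicate; no layer depends on another's state.
    def ip_allowed(request):
        return request.get("ip", "").startswith("10.")   # hypothetical network rule

    def authenticated(request):
        return request.get("token") == "valid-session"   # stand-in for a real check

    def input_valid(request):
        return request.get("action") in {"read", "write"}

    LAYERS = [ip_allowed, authenticated, input_valid]

    def handle(request):
        # A request must pass every layer; removing or changing one layer
        # never weakens or bypasses the others.
        if all(layer(request) for layer in LAYERS):
            return "processed"
        return "rejected"

    handle({"ip": "10.0.0.7", "token": "valid-session", "action": "read"})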

AJ Henderson