
That security through obscurity is A Bad Thing is received wisdom and dogma in information security. Telling people why it should be avoided becomes considerably more difficult when there is no clear line separating what you are trying to ban from apparently effective strategies.

For example, running ssh on a non-default port and port knocking are both suggested as ways of improving ssh security, and both are criticised as ineffective security through obscurity.

In this particular case both solutions reduce the visibility of the system to automated attempts. This does nothing to improve the effectiveness of ssh as a tool or to reduce the need for other ssh security measures. It does, though, provide a way of separating serious attempts from automated passers-by, which improves the manageability of the system.
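
For concreteness, here is a minimal sshd_config sketch of the first suggestion (the port number is an arbitrary example; the point is that the usual hardening directives still do the real work):

    # /etc/ssh/sshd_config -- sketch only; any non-default port works
    Port 2222                    # moves sshd off 22, mainly to cut automated scan noise
    PasswordAuthentication no    # the real hardening still has to be in place
    PermitRootLogin no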

  • Besides manageability/effectiveness, what distinctions describe the boundary between valid/invalid uses of obscurity?
  • What analogies describe effective use of obscurity or draw the distinction between this and ineffective use?
  • Which analogies apparently supporting the effectiveness of obscurity don't hold up and why?
  • What other specific implementations are examples of the valid role of obscurity?
Bell

3 Answers


Interesting question. My thoughts on this are that obscuring information is helpful to security in many cases, as it can force an attacker to generate more "noise", which can then be detected.

Where obscurity becomes a "bad thing" is where the defender relies on it as a critical control, so that without the obscurity, the control fails.

So in addition to the one you gave above, an effective use of obscurity could be removing software name and version information from Internet-facing services (a short configuration sketch follows the list below). The advantages of this are:

  • If an attacker wants to find out whether a vulnerable version of the service is in use, they will have to make multiple queries (e.g. looking for default files, or perhaps testing timing responses to some queries). This traffic is more likely to show up in IDS logs than a single request which returned the version. Additionally, fingerprinting protocols aren't well developed for all services, so it could actually slow the attacker down considerably.
  • The other benefit is that the version number will not be indexed by services like Shodan. This can be relevant where an automated attack is carried out against all instances of a particular version of a service (e.g. where a 0-day has been discovered for that version). Hiding this from the banner may actually prevent a given instance of the service from falling prey to that attack.
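
As a rough configuration sketch of that kind of banner trimming (these are the standard nginx and Apache httpd directives; adapt to taste):

    # nginx: drop the version from the Server header and error pages
    server_tokens off;

    # Apache httpd: report only "Apache" and omit the version footer on error pages
    ServerTokens Prod
    ServerSignature Off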

That said, it shouldn't ever be the only line of defense. In the above example, the service should still be hardened and patched to help maintain its security.

Where I think that obscurity fails is where it's relied on: things like hard-coded passwords that aren't changed, obfuscating secrets with "home-grown encryption", or basing the decision not to patch a service on the idea that no one will attack it. The assumption that no one will find/know/attack this generally fails, possibly because the defenders are limiting their concept of who a valid attacker might be. It's all very well saying that an unmotivated external attacker may not take the time to unravel an obscure control, but if the attacker turns out to be a disgruntled ex-employee, that hard-coded password could cause some serious problems.
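
To make that last failure mode concrete, here is a tiny sketch (the names are purely illustrative): a hard-coded credential ships with every copy of the code and can't be rotated without a redeploy, while a secret read from the environment or a secrets store can be changed the day that disgruntled ex-employee walks out the door.

    # Anti-pattern: the secret is baked into the code and every deployed copy
    ADMIN_PASSWORD = "s3cr3t-2011"

    # Better: the secret lives outside the code and can be rotated independently
    import os
    admin_password = os.environ.get("ADMIN_PASSWORD")  # assumes deployment tooling sets this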

Rory McCune
  • I've often heard admins say something akin to "This system is so old, no one even knows to look for it anymore." when attempting to justify not patching. Nice discussion OtherRory. – Scott Pack Mar 07 '11 at 00:25
  • Good answer. The phrase "security through obscurity is bad" needs to be extended to "security through obscurity is bad if it's the only security you have". By all means make the attacker's job harder by obscuring details, but you should also assume that the attacker knows all the details (except well-defined secrets like passwords (which must be changeable)) and examine your security in that context. – Cameron Skinner Mar 07 '11 at 02:38
  • Well said, and well commented @Cameron. The well-known principle is misstated as "Avoid all security via obscurity", instead of "Don't rely only on security via obscurity". In fact I would encourage it, since it can limit your potential attackers. True, you're limited to the dangerous, knowledgeable ones, but you can focus on them instead of dealing with all the noise from script kiddies and the like. "First make it secure, *then* make it obscure." – AviD Mar 07 '11 at 07:05
  • From my personal experience, what Rory mentioned is quite interesting: "removing software name and version information from Internet facing services". This is information required mainly by the support organization. Typically developers add it and it ends up in Google. Nevertheless, I can tell you that customers and support find this extremely valuable information and ask for even more, such as the patch level... So my recommendation was to keep this info available on a need-to-know basis. – Phoenician-Eagle Mar 07 '11 at 14:11
  • @AviD: **"First make it secure, *then* make it obscure."** I like this a lot and I'll remember it. Of course, if you advertise your system widely before you get to the second step, you'll have a hard time "making it obscure." :) – Wildcard Jan 15 '16 at 22:50
  • @AviD Is that a quote from somewhere? I googled those words and this thread was the only result. – Y     e     z Jan 28 '18 at 22:01
  • @AviD The real problem is that the people who design "obscure systems" often have no clue how an attacker is going to respond to them. I have seen countless examples of people who are absolutely sure that their homegrown boobytrap will catch any attacker, when in reality they would be instantly noticed by even the minimally-educated security professionals. A _real_ professional's version of obscurity involves things like highly-obfuscated, anti-debugging executables that exploit odd CPU behavior to run. _That_ is actually useful, and may require an expert reverse engineer to break. – forest Jan 28 '18 at 23:03

You've mischaracterized the conventional wisdom. The conventional wisdom doesn't say that obscurity is bad. It says that relying upon security through obscurity is bad: it usually leads to fragile or insecure systems. Do note the difference. Obscurity might add some additional security, but you should not rely upon it, and it shouldn't be your primary defense. You should be prepared that the obscurity might be pierced, and be confident that you have adequate defenses to handle that case.

An important concept here is Kerckhoffs's principle. Back in the 1800s, Kerckhoffs had already articulated the reasons why we should be skeptical about security through obscurity, and how to draw a line between appropriate and inappropriate uses of secrecy in cryptography. The Wikipedia article on Kerckhoffs's principle is very good and an excellent starting point.

Here are some points to ponder:

  • As the Wikipedia article says, "The fewer and simpler the secrets that one must keep to ensure system security, the easier it is to maintain system security." Therefore, all else being equal, the fewer things we have to keep secret, the easier it may be to secure the system.

  • Generally speaking, there is little hope of keeping the design or algorithms used in the system secret from dedicated attackers. Therefore, any system whose security relies upon the secrecy of its design is, in the long run, doomed -- and in the short run, it is taking an unnecessary risk.

  • The worst kind of secret is one that cannot be changed if it is compromised or leaked to unauthorized parties. The best kind of secret is one that can be easily changed if it is suspected to be leaked. Building a system where security relies upon keeping the system's design secret is one of the worst possible uses of secrecy, because once the system is deployed, if its secret leaks, it is very hard to change (you have to replace all deployed copies of the system with a totally new implementation, which is usually extremely expensive). Building a system where security relies upon each user to select a random passphrase is better, because if the password is leaked (e.g., the user types their password into a phishing site and then says "Oops!"), it is relatively easy to change the user's password without inconveniencing others.

  • Or, as Kerckhoffs wrote in the 1800s, the design of a cryptosystem must not be required to be secret, and it must be able to fall into the hands of the enemy without inconvenience. This is basically a restatement of my previous point, in a particular domain.

For these reasons, well-designed systems generally try to minimize the extent to which they rely upon secrets; and when secrets are necessary, one usually designs the system to concentrate all the required secrecy into a cryptographic key or passphrase that can be easily changed if compromised.
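
To illustrate that last point, here is a small sketch using Python's cryptography package (my choice of library for the example, nothing more): the Fernet construction itself is completely public, all of the secrecy is concentrated in the key, and the key can be rotated if it is suspected to have leaked.

    # pip install cryptography -- illustrative sketch only
    from cryptography.fernet import Fernet, MultiFernet

    old_key = Fernet.generate_key()      # the only secret: a random key that is easy to replace
    token = Fernet(old_key).encrypt(b"the design of the system is not a secret")

    # Suspected compromise? Generate a new key and re-encrypt existing data under it.
    new_key = Fernet.generate_key()
    token = MultiFernet([Fernet(new_key), Fernet(old_key)]).rotate(token)

    assert Fernet(new_key).decrypt(token) == b"the design of the system is not a secret"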

D.W.
  • But as @Rory also stated, obscurity is not necessarily a *bad* thing, of itself - it's only bad when you *rely* on it for your security. **If** your system is secured properly, then there **are** situations (as was the original question) where it can help *just a little bit more*. Of course you shouldn't rely on it - and we'd have to be careful to avoid a false sense of security - but obscurity *does* have some value - consider it an element of minimizing attack surface (or a form thereof). "First make it secure - then make it obscure". (Gonna adopt that line...) – AviD Mar 09 '11 at 09:31
  • @AviD, I don't understand your comment. Why did you post it in response to my answer? I agree with everything you wrote, and I think my answer is completely consistent with what you wrote (I don't see anything you wrote that contradicts anything I wrote). If you meant to criticize my answer, I don't understand what specifically you are criticizing. – D.W. Mar 11 '11 at 06:36
  • Well, no, it doesn't directly contradict anything specific. I was assuming, since you were quoting Kerckhoffs's principle in response to a question of "when *is* obscurity appropriate", that you were saying you should not have obscurity as a security control *at all*. Maybe I was just reading between the lines... sorry if I misread you :) – AviD Mar 11 '11 at 07:43
  • @AviD, good point. No, that's not what I meant to imply -- my fault for not writing more clearly! Here was the point I was implicitly trying to make: to understand when security through obscurity is/isn't appropriate, first you need to understand the reasons why it is potentially dangerous and under what circumstances it is dangerous. Then in any concrete situation, you can check whether those reasons apply to your specific situation, and use that to decide whether security through obscurity has value. But I should have made that more explicit. Sorry about that. – D.W. Mar 13 '11 at 07:42
  • Ah, now I can agree with that. Although I think that might be a bit too... "hard" for typical programmers to have as a guiding principle. So while I agree in theory, and it can be helpful for security pros, I think @Rory's explanation (which basically comes down to the same thing) is "easier" for non-security-pro consumption... – AviD Mar 13 '11 at 08:47

It's *depending on* security through obscurity that is the bad part. Obscurity can increase security, but you can't rely on obscurity alone to provide security.

Absolute mantras are always harmful. ;) It's essential to understand the reasoning behind the mantra and the tradeoffs involved.

For example, hiding a key outside your house when you are going for a run is security through obscurity, but it might be an acceptable risk if you are going to be back in 30 minutes (and aren't a high-risk target?).

The same can be said for "never use goto." In certain situations, goto is the clearest way to write the code. As an experienced professional, you need to understand the reasons behind the guidelines so you can understand the tradeoffs.

Bradley Kreider