
I'm a beginner and reading about attitudes towards "security by obscurity." I understand that there are varying degrees of vehemence in the opposition to the use of obscurity, but I am trying to clarify for myself how absolute this is.

I understand that relying exclusively on obscurity is pretty unanimously frowned upon. I'm only discussing types of obscurity that could be added to a defense-in-depth strategy.

For example, things like configuring hosts to not respond to ICMP echo requests (in cases where that makes sense) or sanitizing banner info in order to obscure the topology or software of my network seem like no-cost practices that make it one step harder for a non-determined attacker to target my network.
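For concreteness, here is a minimal sketch of the two measures mentioned above on a Linux host running Apache httpd (the sysctl key and the Apache directives are standard; whether they make sense for your environment is a judgment call, as the question notes):

```shell
# Ignore all ICMP echo requests (Linux).
# This takes effect immediately but only lasts until reboot;
# add the line to /etc/sysctl.conf to make it persistent.
sysctl -w net.ipv4.icmp_echo_ignore_all=1

# Apache httpd banner sanitization: place these in the main server
# config (e.g. httpd.conf) to shrink the Server header to "Apache"
# and suppress version info in server-generated pages.
#   ServerTokens Prod
#   ServerSignature Off
```

Neither setting stops a determined attacker (the software can still be fingerprinted by behavior), but both remove free information at essentially no cost.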

Is there some line of delineation at which everyone agrees that obscurity is a good idea, or is there something I am missing in which even these types of obscurity would be discouraged? Is there perhaps a different term or category for these types of obscurity?

  • Are you sure your question isn't already covered among the many questions under the [tag:obscurity] tag? E.g. [At what point does something count as 'security through obscurity'?](https://security.stackexchange.com/questions/32064/at-what-point-does-something-count-as-security-through-obscurity), [The valid role of obscurity](https://security.stackexchange.com/questions/2430/the-valid-role-of-obscurity), [Isn't all security “through obscurity”?](https://security.stackexchange.com/questions/44094/isnt-all-security-through-obscurity) – Arminius Jan 28 '18 at 22:09

2 Answers


The fundamental thing to understand is Kerckhoffs's principle: the enemy knows the system. This means you have to work on the basis that obscurity has failed and the attacker knows everything about how your system is meant to work. So, as you've concluded, there must be no reliance on obscurity.

However, you're under no obligation to help the enemy know your system. So sure, the low cost methods you mention are worth doing. You're not relying on them, but they don't hurt and might help.

Graham Hill
  • Just a nitpick, but Kerckhoffs's principle is specific to cryptosystems. A more general idea for information security would be the very similar Shannon's maxim. – forest Jan 28 '18 at 22:52

The principle is unclear to you because you haven't defined it.

One good definition is NIST's, albeit from a slightly outdated publication: "System security should not depend on the secrecy of the implementation or its components". The two key words here are "depend" and "implementation". Take your own example: if the security of a network depends somehow on the network topology being top secret, it's a bad design. But as long as there are other defenses (firewalls, patch management, encryption, and so on), and the secrecy of the topology is not a requirement but simply an additional hardening measure, this is fine.

Moreover, even that rule may have exceptions. One example I can easily come up with is CAPTCHA challenges, which are nowadays secure only as long as the attacker doesn't have access to the source code of the challenge-generator implementation. You're free, of course, to call that a bad design as well; however, we currently just don't have a better solution than CAPTCHA for a long list of problems. It doesn't mean, nevertheless, that you may simply ignore the principle: any departure from it must be thoroughly justified.

ximaera
  • I wouldn't be so sure that having the source code of the generator would easily allow an attacker to beat it. If I start with a string of letters and put it through a transform, I know the string of letters, so I'll know if the correct string is entered or not. Just knowing how the transform works does not make it easy to reverse, any more than knowing how a large semiprime is made (integer multiplication) allows it to be factored. The real problem is that computers are closing in on humans in pattern-recognition accuracy. "Just easy enough for humans" may also be easy enough for computers. – forest Jan 28 '18 at 22:55
  • @forest you don't need to reverse a transform. You just generate images for a **very long** list of words. The list with corresponding images forms the training data set. Next you use this data set to train a neural network to correctly recognize text on images, close to how a human would, until the accuracy is good enough. The author of [the article I've linked to](https://medium.com/@ageitgey/how-to-break-a-captcha-system-in-15-minutes-with-machine-learning-dbebb035a710) reached something close to 100%. In the real world, even 20% to 50% accuracy is good enough for a lot of purposes. – ximaera Jan 28 '18 at 23:02
  • Oh I get what you mean. Yeah using it to train neural networks certainly improves accuracy, to the point where real-world implementations can be broken. – forest Jan 28 '18 at 23:09
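The attack described in the comments above can be illustrated with a toy Python sketch. The "generator" here is a stand-in, not a real CAPTCHA renderer: it maps each character to an opaque glyph id instead of drawing an image, and the "solver" is a lookup table instead of a neural network. The point it demonstrates is the same, though: an attacker who can *run* the generator can manufacture unlimited labeled training data and learn to invert it, without ever reversing the transform analytically.

```python
import random
import string

# Hypothetical stand-in for a CAPTCHA generator. Each character is
# "rendered" as an opaque glyph id; the mapping itself is secret.
random.seed(42)
GLYPH = dict(zip(string.ascii_lowercase,
                 random.sample(range(1000), len(string.ascii_lowercase))))

def generate_challenge(word):
    """Render a word as a sequence of opaque glyph ids."""
    return [GLYPH[c] for c in word]

def train_solver(wordlist):
    """Attacker's side: call the generator on known words to build a
    labeled data set, and learn the inverse glyph->char mapping from it.
    (A real attack would train an image classifier the same way.)"""
    inverse = {}
    for word in wordlist:
        for ch, glyph in zip(word, generate_challenge(word)):
            inverse[glyph] = ch
    return inverse

def solve(challenge, inverse):
    """Solve a fresh challenge using the learned mapping."""
    return "".join(inverse[g] for g in challenge)

solver = train_solver(["security", "obscurity", "defense"])
# Every glyph in "cry" was seen during training, so the attacker
# solves this challenge perfectly:
print(solve(generate_challenge("cry"), solver))  # prints "cry"
```

The design mirrors the comment thread: the secret is never recovered directly; it is simply learned from generator output, which is why the generator's source code (or even just query access to it) is the thing that must stay obscure.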