
I know that one shouldn't rely on "obscurity" for security. For example, choosing a non-standard port is not really security, but it also doesn't usually hurt to do so (and may help mitigate some of the most trivial attacks).

Hashing and encryption rely on strong randomization and secret keys. RSA, for instance, relies on the secrecy of d, and by extension, p, q, and ϕ(N). Since those have to be kept secret, isn't all encryption (and hashing, if you know the randomization vector) security through obscurity? If not, what is the difference between obscuring the secret sauce and just keeping the secret stuff secret? The reason we call (proper) encryption secure is that the math is irrefutable: it is computationally hard to, for instance, factor N to figure out p and q (as far as we know). But that's only true because p and q aren't known. They're basically obscured.

I've read The valid role of obscurity and At what point does something count as 'security through obscurity'?, and my question is different because I'm not asking how obscurity is valid or where in the spectrum a scheme becomes obscure, but rather whether hiding all our secret stuff isn't itself obscurity, even though we define our security to be achieved through such mechanisms. To clarify what I mean: the latter question's answers (excellent, by the way) seem to stop at "...they still need to crack the password" -- meaning that the password is still obscured from the attacker.

Matt
    If you keep everything secret then you can only communicate with yourself... The point is not to rely on something that your adversary has a chance of finding out or to assume that simply because you're not making it publicly known it will not be discovered. – Guy Sirton Oct 20 '13 at 02:45
    This one's been answered several times over, but understanding it is core to understanding what security actually is. – tylerl Oct 20 '13 at 06:31
    Just another example of security through obscurity: http://stackoverflow.com/q/5217416/1149595 – kiss my armpit Oct 20 '13 at 11:44
    One aspect of separating secrecy of algorithm from secrecy of the key is that you don't have to kill those who designed your security system to keep your secret a secret. – Lie Ryan Oct 20 '13 at 17:01
    See also [What is the actual difference between security through obscurity and true encryption?](http://crypto.stackexchange.com/q/3540/2663) – Tobias Kienzler Oct 21 '13 at 08:28

11 Answers


See this answer.

The main point is that we make a sharp distinction between obscurity and secrecy; if we must narrow the difference down to a single property, then that must be measurability. A secret is that which is not known to outsiders, and we know how much it is unknown to these outsiders. For instance, a 128-bit symmetric key is a sequence of 128 bits, such that all 2^128 possible sequences would stand an equal probability of being used, so the attacker trying to guess such a key needs to try out, on average, 2^127 of them before hitting the right one. That's quantitative. We can do math, add figures, and compute attack cost.
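
To make that quantitative claim concrete, here is a minimal sketch (my illustration, not part of the original answer; the guess rate of 10^12 per second is an arbitrary assumption):

```python
import secrets

# A 128-bit symmetric key: all 2**128 possible values are equally
# likely, which is exactly what makes its secrecy measurable.
key = secrets.token_bytes(16)   # 16 bytes = 128 bits

expected_guesses = 2 ** 127     # on average, half the keyspace

# Assume a (very generous) rate of 10**12 guesses per second:
seconds = expected_guesses / 10 ** 12
years = seconds / (365 * 24 * 3600)
print(f"~{years:.2e} years of brute force, on average")  # ~5.39e+18 years
```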

The same would apply to an RSA private key. The maths are more complex because the most effective known attacks rely on integer factorization, and the algorithms involved are not as easy to quantify as brute force on a symmetric key (there are a lot of details about RAM usage and parallelism, or the lack thereof). But that's still secrecy.
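
For reference, the cost usually quoted here is the heuristic running time of the General Number Field Sieve, the best classical algorithm known for factoring large general integers:

```latex
% Heuristic GNFS complexity for factoring an integer N:
L_N\!\left[\tfrac{1}{3}\right]
  = \exp\!\left( \left( \sqrt[3]{64/9} + o(1) \right)
                 (\ln N)^{1/3} \, (\ln \ln N)^{2/3} \right)
```

That grows far more slowly than the 2^n cost of brute force on an n-bit symmetric key, but it is still a number you can plug into a cost estimate, which is the point.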

By contrast, an obscure algorithm is "secret" only as long as the attacker does not work out the algorithm details, and that depends on a lot of factors: access to hardware implementing the algorithm, skill at reverse-engineering, and smartness. We do not have a useful way to measure how smart someone can be, so the "secrecy" of an algorithm cannot be quantified. We have another term for that, and that's "obscurity".

We want to do security through secrecy because security is risk management: we accept the overhead of using a security system because we can measure how much it costs us to use it and how much it reduces the risk of successful attacks, and we can then balance the costs to make an informed decision. This works only because we can put numbers on the risk of successful attacks, and that can be done only with secrecy, not with obscurity.

Tom Leek
    How can you "measure" the mathematical probability of a "secret" becoming an "obscurity", especially in the age of highly experimental (e.g. quantum) computers. At some point, your 128bit key might be quicker cracked than the port on which your "here's my password" service runs can be guessed (since each network connection might take longer to make than the processing time required to break your key). Just because that's probably not true today does not place your idea of a "secret" squarely into a different class than what you call "obscurity". They are both exactly in the same continuum. – orokusaki Oct 20 '13 at 02:13
    Whether or not there is an obscurity element is subjective. If you use a AES with a 256 bit key, the situation acquires a security through obscurity element if you choose to *believe* that you are getting extra protection, above that furnished by your 256 bit key, from the fact that your attacker does not know that you are using AES. The slogan *security through obscurity* is basically a way of expressing criticism of such a belief. – Kaz Oct 20 '13 at 08:47
  • I *love* security.stackexchange.com, and this answer is one of the many reasons. – MattBianco Aug 11 '17 at 14:07
  • Even if you can quantify certain aspects of certain straight forward attacks, [some attacks](https://xkcd.com/538/) can be hard to quantify. – Muhd Jun 05 '18 at 04:58

I think that the term "security through obscurity" gets misused quite often.

The most frequently referred to quote when talking about security through obscurity is Kerckhoffs's principle.

It must not be required to be secret, and it must be able to fall into the hands of the enemy without inconvenience;

Security through obscurity refers to relying on keeping the design and implementation of a security system secret by hiding the details from an attacker. This isn't very reliable, as systems and protocols can be reverse engineered and taken apart given enough time. Also, a system that relies on hiding its implementation cannot depend on experts examining it for weaknesses, which probably leads to more security flaws than in a system that has been examined, had its bugs made known, and fixed.

Take RSA for example. Everyone in the world knows how it works. Well, everyone that has a good grasp of the mathematics involved anyhow. It is well studied and relies on difficult mathematical problems. However, given what we know about the mathematics involved, it is secure provided the values of p and q are kept a secret. This is essentially concentrating the work of breaking (and protecting) the system into one secret that can be protected.
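
A toy sketch of that "one protectable secret" (mine, not the answer's; the primes are deliberately tiny and utterly insecure, and the modular inverse via pow assumes Python 3.8+):

```python
# Toy RSA with tiny primes -- for illustration only.
p, q = 61, 53                # the secrets
n = p * q                    # 3233: public
phi = (p - 1) * (q - 1)      # 3120: derivable only if you know p and q
e = 17                       # public exponent
d = pow(e, -1, phi)          # 2753: the private exponent, kept secret

m = 42                       # a message
c = pow(m, e, n)             # anyone can encrypt using (n, e) alone
assert pow(c, d, n) == m     # decrypting requires the secret d

print(n, e, c)               # everything an eavesdropper gets to see
```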

Compare this with an encryption algorithm that does not subscribe to Kerckhoffs's principle. Instead of using a publicly known scheme that uses a secret key, this encryption algorithm is secret. Anyone that knows the algorithm can decrypt any data encrypted with the algorithm. This is very difficult to secure as the algorithm will be nearly impossible to keep out of the hands of an enemy. See the Enigma machine for a good example of this.

The interesting thing is that supposedly the Enigma's mechanical PRNG was *known* to contain a significant bias (namely, it never produced a zero, so a cipher-byte never enciphered to itself), but the Nazis did not fix it, instead just messing up the wires (implementation) once in a while, hoping to thwart reverse-engineering. Of course that didn't work. – ithisa Oct 19 '13 at 19:14
  • @ithisa But it was close to working. The four rotor Enigma machine would have taken 20-30 days to crack, which would have made a crack useless. (Un)fortunately they used the same first three wheel setting as for ordinary three rotor Enigmas, so only 26 settings for the fourth wheel were needed once the ordinary Enigma was cracked in a day. – gnasher729 Dec 24 '19 at 13:31

The critical difference is in what is kept secret.

Take RSA as an example. The core principle of RSA is simple mathematics. Anyone with a little mathematical knowledge can figure out how RSA works functionally (the math is almost half a millennium old). It takes more imagination and experience to figure out how you could leverage that for security, but it has been done independently at least twice (by Rivest, Shamir and Adleman, and a few years earlier by Clifford Cocks). If you design something like RSA and keep it secret, there's a good chance that someone else will be clever enough to figure it out.

On the other hand, a private key is generated at random. When done correctly, random generation ensures that it is impossible to reconstruct the secret with humanly available computing power. No amount of cleverness will allow anyone to reconstruct a secret string of random bits, because that string has no structure to intuit.

Cryptographic algorithms are invented out of cleverness, with largely-shared goals (protect some data, implement the algorithm inexpensively, …). There's a good chance that clever people will converge on the same algorithm. On the other hand, random strings of secret bits are plentiful, and by definition people won't come up with the same random string¹. So if you design your own algorithm, there's a good chance that your neighbor will design the same one. And if you share your algorithm with your buddy and later want to keep a communication private from him, you'll need a new algorithm. But if you generate a secret key, it'll be distinct from your neighbor's and your buddy's. There's definitely potential value in keeping a random key secret, which is not the case for keeping an algorithm secret.

A secondary point about key secrecy is that it can be measured. With a good random generator, if you generate a random n-bit string and keep it secret, there is a probability of 1/2^n that someone else will find it in one try. If you design an algorithm, the risk that someone else will figure it out cannot be measured.

RSA private keys aren't a simple random string — they do have some structure, being a pair of prime numbers. However the amount of entropy — the number of possible RSA keys of a certain size — is large enough to make one practically unguessable. (As for RSA keys being practically impossible to reconstruct from a public key and a bunch of plaintexts and ciphertexts, that's something we can't prove mathematically, but we believe to be the case because lots of clever people have tried and failed. But that's another story.)
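
That entropy can even be estimated with the prime number theorem, π(x) ≈ x/ln x. A back-of-the-envelope sketch (my own, and deliberately rough):

```python
import math

# Roughly how many 512-bit primes (candidate factors of a 1024-bit RSA
# modulus) are there?  Prime number theorem: pi(x) ~ x / ln(x).
x = 2.0 ** 512
primes_below = x / math.log(x)
print(f"about 2**{math.log2(primes_below):.0f} primes")  # about 2**504
```

Even after discounting structure (two primes per key, ordering, and so on), the pool of possible keys dwarfs anything guessable.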

Of course this generalizes to any cryptographic algorithm. Keep random strings secret. Publish clever designs.

This isn't to say that everything should be made public except for the small part that's a random bunch of bits. Kerckhoffs's principle doesn't say that — it says that the security of the design should not rely on the secrecy of the design. While cryptographic algorithms are best published (and you should wait a decade or so before using them to see if enough people have failed to break them), there are other security measures that are best kept secret, in particular security measures that require active probing to figure out. For example, some firewall rules can fall into this category; however a firewall that doesn't offer protection against an attacker who knows the rules would be useless, since eventually someone will figure them out.

¹ While this is not true mathematically speaking, you literally can bet on it.

Gilles 'SO- stop being evil'

Security is all about keeping secrets, but good security lies in knowing which secrets you can keep, and which you cannot.

And in particular, the best security protocols are built around the principle of factoring the secret out of the design, so that your secret can be kept without having to keep the design secret as well. This is particularly important because system designs are notoriously impossible to keep secret. This is the core of Kerckhoffs's principle, which goes back to the design of old military encryption machines.

In other words, if your algorithm is your secret, then anyone who sees an implementation of your algorithm -- anyone who has your hardware, anyone who has your software, anyone who uses your service -- has seen your secret. The algorithm is a terrible place to put your secrets, because algorithms are so easy to examine. Plus, secrets embedded into designs can't be changed without changing your implementation. You're stuck with the same secret forever.

But if your machine doesn't need to be kept secret, if you've designed your system such that the secret is independent of the machine -- some secret key or password -- then your system will remain secure even after the device is examined by your enemies, hackers, customers, etc. This way you can focus your attention on protecting just the password, while remaining confident that your system can't be broken without it.
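
A familiar concrete case of this (a sketch of my own, assuming Python 3.9+): password verification, where the whole scheme is public and only the password is secret:

```python
import hashlib
import secrets

# The scheme is entirely public (PBKDF2-HMAC-SHA256, 600k iterations,
# 16-byte random salt); security rests solely on the secret password.
def make_record(password: str) -> tuple[bytes, bytes]:
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return secrets.compare_digest(candidate, digest)

salt, digest = make_record("correct horse battery staple")
print(verify("correct horse battery staple", salt, digest))  # True
print(verify("hunter2", salt, digest))                       # False
```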

tylerl

Security through obscurity is generally discussed by reference to Kerckhoffs's principle, which states that the system must be secure even if everything except the key is common knowledge. This doesn't just apply to cryptography or the generation of ciphertexts. It also means you shouldn't count on the generation of URLs, passwords, hashes, memory addresses, or even the system architecture being secret. The reason is that something in the system has to be secret, and isolating the secret portion to just one large number makes it a lot easier to use.

The reasons for protecting the key, not the algorithm, are the greater chance of the algorithm being leaked, the greater number of possible keys than possible algorithms, and the greater cost of a leak. The algorithm is very easy to leak. Supposedly classified algorithms and hardware secrets are leaked every day, for example by:

  • A hacker stealing your source code
  • Making your algorithm available for pentesting or researchers to review.
  • Reverse-engineering of your algorithm by anyone who obtains the software/hardware
  • Reverse-engineering of your algorithm by anyone who also uses the algorithm
  • Disgruntled or malicious former employees leaking the algorithm
  • Brute force guessing since it's usually a lot easier to guess an insecure algorithm than to guess a 256-bit key.

Revealing your algorithm to researchers presents a Catch-22: you can't argue that a secret method is secure without revealing it, at which point it is no longer secret or secure. That's why we segregate the secret part out of our algorithms: you can then show that using your method doesn't reveal the secret key.

A leak or rekey is also very expensive when the algorithm is part of the secret. To stay ahead of attackers you have to redesign and update every usage of the algorithm with something brand new. You may even have to change out the "secret" every few months if you're operating a very secure system. It's much easier to replace the key in a secure algorithm than to replace the whole algorithm, especially when hardware, backwards compatibility, or pushing updates to clients is involved.

The idea here is that every secure system has a secret. Whenever generating something you don't want a hacker to be able to reverse or guess without knowing a secret, it's good engineering to:

  1. Make the secret knowledge easy to change or replace.
  2. Make sure the secret itself is hard to deduce from inputs and outputs.
  3. Make sure the secret is complex enough that it can't be guessed.

If I build a box that takes in one number and spits out another (or takes in a seed and spits out "random" values, etc.), and someone then builds an identical box, I have to change my box. If all I have to change is a 256-bit number, that saves me a lot of time and effort. Similarly, if I want to sell these boxes, every single one has to be different. Changing the algorithm for each box you sell, instead of changing a random key for each box, would be ridiculously bad design.
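
A keyed pseudorandom function is exactly such a box. A minimal sketch using HMAC-SHA256 (my example; the answer doesn't prescribe any particular construction):

```python
import hashlib
import hmac
import secrets

# The "box": a public, well-studied algorithm (HMAC-SHA256) whose
# behaviour is personalised entirely by a replaceable 256-bit key.
key = secrets.token_bytes(32)

def box(value: bytes) -> bytes:
    return hmac.new(key, value, hashlib.sha256).digest()

print(box(b"42").hex())

# If someone clones the box, recovery is cheap: swap the key, not the design.
key = secrets.token_bytes(32)
print(box(b"42").hex())   # same input, completely different output
```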

Finally, it's worth understanding that security through obscurity and "roll your own crypto" are frequently found together. Don't roll your own crypto. If you secretly change your crypto, the gain in secrecy is outweighed by the loss in security. By rolling your own crypto you're very possibly making your system trillions or even 2^(big number) times cheaper to crack, and I guarantee an attacker won't take a trillion guesses to discover how you rolled your own system.

Cody P

Security through obscurity means that the security hinges on the algorithm being kept secret.

For example, if I decide to use rot13 for my encryption, the security of the system relies on me making sure nobody else knows the algorithm I'm using. Furthermore, the onus is on me to determine how crackable the algorithm is.
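
(To underline how little there is to keep secret here: rot13 is literally built into Python's standard library. A minimal sketch:)

```python
import codecs

ciphertext = codecs.encode("attack at dawn", "rot_13")
print(ciphertext)  # nggnpx ng qnja

# Once the "secret" algorithm is known, no key is needed to decrypt:
print(codecs.decode(ciphertext, "rot_13"))  # attack at dawn
```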

One major issue is that I cannot distribute this system, because anyone can just reverse engineer the algorithm and use it to break my encryption. Besides, if the code is compromised, so is everything else.

A protocol that relies on security by obscurity will usually be eventually crackable by analyzing the output as well. (Of course, one can engineer algorithms that aren't prone to this — the easiest thing to do is to take the RSA algorithm and hardcode keypairs, trivially creating a "security by obscurity" algorithm.) One shouldn't trust that the algorithm won't eventually be guessed.

On the other hand, if I use RSA for encryption, I can have each instance generate its own keypair, and thus the program can be distributed without fear. I can protect the keys from being compromised by special hardware devices that contain the key and can encrypt messages but do not have the ability to spit out the key. Also, being a publicly known protocol, many, many people have analysed the protocol for security holes; I can trust that the encrypted messages cannot be cracked. We know that the keys can't be guessed because probability is on our side.
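
In practice, that per-instance key generation is a one-liner. A sketch using the pyca/cryptography package (assuming a recent version of it is installed; the 2048-bit size and exponent 65537 are conventional choices, not requirements):

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Each deployed instance mints its own keypair; the code is identical
# everywhere, and only the randomly generated secret differs.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Only the public half ever needs to leave the device:
pem = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)
print(pem.decode())
```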

This is not security by obscurity. This is security through secrecy. The algorithm is public, there is just a "secret" (the key) which is kept secret.

Manishearth

As others have said, obscurity really refers to implementation.

Let me give an example. Suppose you and your partner have a pound of gold nuggets in your house which you wish to secure, but disagree as to how...

One of you happens to have a genie, which you can order to incinerate anyone that gets within 5 feet without giving a password.

One of you wants to hide the nuggets in the drain pipe of the kitchen sink, saying that the drain will still work and nobody would ever think to look there. This method has been used with all past partners and has not failed yet.

Both the password and the location are "secret" but the location relies upon them not looking there, while the password, even if you reuse it everywhere, relies upon them not knowing it.

jmoreno
    // , Where can I find stories about genies like this? Alladin, go home! – Nathan Basanese Jun 15 '15 at 04:31
  • // , Too late: Bartimaeus Trilogy is a thing. I am going to use this genie s*** to teach the little chickadees about encryption, now. Great way to extend the metaphor of Clarke's Third Law, @jmoreno. – Nathan Basanese Jun 15 '15 at 04:39
    A bigger issue arises if one has a repeated need to make the gold available to a new partner but no longer have it available to any of the old ones. One can select from a nearly-infinite pool of arbitrarily-complex passwords for the genie, but there are by comparison only a small number of hiding places within the dwelling. – supercat Dec 02 '15 at 21:12

Take a sliding scale from 1 to 10, 10 being "IEEE compliant security" and 1 being "knife-proof vault made from layers of flannel". "Security through obscurity" is a phrase used to describe a security plan wherein the value is between 1 and 8, depending on who you ask and the day of the week, as well as the current estimated level of coronal mass ejection from Betelgeuse.

In other words, it's the result of an entirely subjective measurement of how close to standard a security policy is, since nobody can possibly know the difference between the risks, during a zero-day exploit, to a standard vs a non-standard security plan.

orokusaki
  • // , Plus one for Betelgeuse. Can you give an example? This doesn't seem to give any examples of security through other means than obscurity, though. – Nathan Basanese Jun 15 '15 at 04:32
  • @NathanBasanese one example would be a man standing at the gate with a spiked bat, trained to hit anyone that attempts to enter. – orokusaki Oct 13 '17 at 02:34

I think security through obscurity can be viewed in this manner:

You have a door that unlocks by turning the door knob fully clockwise instead of anti-clockwise. You provide a keyhole on the door knob to obscure the fact that it doesn't require a key to unlock.

Translated to information technology, I think this is akin to implementing a security feature in an unusual manner to throw any attacker off track. For example, masquerading a web server as IIS when it is in fact Nginx.
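
As a toy illustration of such banner masquerading, here is a sketch using only Python's standard library (the IIS banner string is my made-up example, and this is a demonstration, not a recommendation):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class MasqueradingHandler(BaseHTTPRequestHandler):
    # Advertise an IIS banner even though this is plainly not IIS.
    server_version = "Microsoft-IIS/10.0"
    sys_version = ""  # suppress the usual "Python/x.y" suffix

    def do_GET(self):
        self.send_response(200)  # also emits the spoofed Server header
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello\n")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), MasqueradingHandler).serve_forever()
```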

Security through obscurity is not necessarily bad security. The key lies in its implementation. That is, whether you are able to execute it consistently without tripping yourself up due to this non-conventional feature that you have implemented.

Question Overflow

A lot of text, for a great question.

Allow me to simplify the answer to this with an analogy. Obscurity can be defined in a lot of ways, all in accordance with one another: 'something difficult to comprehend has obscurity'.

Observe a door. I suggest there are three ways to secure it:

  1. Hide the handle (obscurity, no security)
  2. Lock it with a key (security)
  3. Hide the handle, and lock it with a key (security and obscurity)

(You could also hide the keyhole or lock, as far as my analogy is concerned.)

What matters is that even though one knows the door needs a key, one does not know which key; that is secrecy, or security. Everyone knows a handle on a door is used to open it, so hiding the handle is just obscurity.

Combined, those approaches are actually more secure than they are by themselves.

  • This doesn't really clarify why having a key isn't obscurity since if someone knows the shape of the key, they could make another one, just like if they knew where the handle was, they could use it. It gets close to the answer, but doesn't really make it all the way there. – AJ Henderson Aug 11 '17 at 14:28

Obscurity deals with how things are secured, rather than what information is needed to gain access. In the case of something like changing ports, the port to use is the actual means of securing access, and it is also easily discovered by watching behavior. When using a closed-source algorithm and relying on its closed-source nature to make it hard to figure out, the security is the same for everyone using the system. If you break it once, you break it for everyone, because you are leaving an attack on the entire system up to obscurity.

For something like a password, it's a key. Yes, that key is an "obscure" secret, but knowing any particular password doesn't break the system; it breaks the user. The security of the system works perfectly even when known. It still succeeds at only allowing in users who possess that knowledge, and each user is able to use a different secret or change their secret to allow access.

So the difference is whether you are dealing with secrecy of the method or secrecy of the key. If the method requires secrecy to be secure, it breaks, in its entirety, as soon as the method is compromised. If the method does not require secrecy, but only information for a particular usage of it, then it provides security, as the scope of a compromise is limited to a 1:1 relationship between the secret and the thing being accessed.

Effectively, if you can map each secret to one particular thing being protected, and protecting that secret is sufficient to protect the thing itself, then the system is secure enough.

AJ Henderson