20

I understand that it is important to use well-known and well-tested hashing algorithms instead of designing my own. For these, reference implementations are often available, which initialize the required constants with hand-picked numbers.

When I use such implementations, does it improve security to pick custom constants? I would expect an attacker to try the most likely values when brute-forcing my hashes, namely those from the reference implementation.

A strong cryptographic hashing algorithm shouldn't be breakable, not even with rainbow tables, when salting is used. So from a theoretical point of view, there shouldn't be much of a difference. However, I'm not an expert, so I'd like to hear what you say.

danijar
    If changing those constants improved security, the designers wouldn't have made them constant. – Stephen Touset Aug 19 '14 at 06:45
    Does asking this question mean you're implementing the functions yourself? On top of not designing your own algorithms you should use a well-known library for the implementation as well. – Fsmv Aug 19 '14 at 19:28

4 Answers

30

No. The constants are part of what makes the hash secure, and the constants in the specifications are the ones that have been used in the cryptographic community's examinations of the hash functions we currently believe are safe. It has been shown that intentionally badly chosen constants can break a hash function in subtle but exploitable ways, and coming up with your own constants could inadvertently leave you with a weak hash function as well.

Xander
  • Conversely, they could strengthen your hash as well if you select correctly. –  Aug 19 '14 at 03:26
    @caseyr547 And dousing yourself with gasoline each day might protect you from alien attack, but without independent verification that this is indeed the case, it can only be classified as a stupid thing to do. You're introducing substantial risk for a minimal chance of a reward that you won't ever be able to demonstrate you've actually gotten. – Xander Aug 19 '14 at 03:32
    If you want to douse yourself in gasoline under the White House, in the safest place on earth, or on the doomsday plane, I don't think it's going to hurt anything. –  Aug 19 '14 at 03:43
    The [S-boxes of DES](http://crypto.stackexchange.com/questions/16/how-were-the-des-s-box-values-determined/42#42) are another example, where NSA supplied constants that at the time appeared to make no difference in terms of security but later were shown to strengthen the algorithm against some attacks. – Gilles 'SO- stop being evil' Aug 19 '14 at 07:12
  • So as an example: http://weblogs.asp.net/jongalloway//encrypting-passwords-in-a-net-app-config-file uses as entropy "Salt Is Not A Password". So what you're saying is that I should not replace that with a custom string? – Nzall Aug 19 '14 at 08:40
  • From the link in your answer: "To third party analysis, malicious SHA-1 [i.e. with modified constants] remains as strong as the original SHA-1: the backdoor is “undiscoverable“, it can only be exploited by the designer." – example Aug 19 '14 at 09:25
  • @NateKerkhofs Not exactly. That is an attempt (a failed attempt, but an attempt none-the-less) to salt the password. You *should* salt passwords with a globally unique salt, one for each password. The constants we're discussing here are internal to the hash function, and you won't see them in higher level APIs like the ones exposed by .Net. – Xander Aug 19 '14 at 13:07
  • @Xander Actually, that's not a hashing/salting function. That's an encryption function used to store the password for a 3rd party app in the settings file with the intent of retrieving them for later reauthentication. but since it's not about hashing, but about encryption and entropy, I think it's not quite related. Still, I wanted to ask it, because I've used that code as-is in a demo project and wasn't sure if I had to change that string. – Nzall Aug 19 '14 at 13:16
  • @NateKerkhofs Ah, ok, I didn't look closely enough. Yes, if I were you, I would change that string. It's probably not a huge deal given how it's used, but I'd generate a new string for each application that uses that code, and I'd replace with a truly random set of 16 bytes which you can get from the RNGCryptoServiceProvider in .Net. – Xander Aug 19 '14 at 13:25
  • @Xander A new string for each application? The program is a desktop WPF application, and it can only connect to a single instance of the 3rd party app at a time. I think generating a new string every time you save the password might be overkill. If the machine is compromised, there are easier ways to read the password than decrypting an app.config file. – Nzall Aug 19 '14 at 13:30
  • @NateKerkhofs Each application you build that requires that code sample. The WPF app would be one, so all instances would use the same string, unless you want to change the model to have the individual client application instances generate and securely store that entropy value on first use. – Xander Aug 19 '14 at 13:33
  • @NateKerkhofs The point is not to protect the data from a fully compromised machine, but from a second malicious application. In any case, this is probably off-topic for this question, but would probably make a good question in its own right. – Xander Aug 19 '14 at 13:35
  • @Xander Now I understand, thanks for explaining. One last question: What's the easiest way to generate such a set? I know I can simply build a program, but are there any existing solutions for this? Or I'll just create a new question for this, given that we're going off-topic with this. – Nzall Aug 19 '14 at 13:35
  • @NateKerkhofs Easy enough to get them from RNGCryptoServiceProvider().GetBytes() and transform to a Base64 string. It's about a five line console application. I don't know any existing solutions I'd recommend off the top of my head, sorry. – Xander Aug 19 '14 at 14:13
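The "five line console application" Xander describes can be sketched in Python rather than .NET's RNGCryptoServiceProvider (an illustrative stand-in, not the code from the thread); the 16-byte length matches his suggestion:

```python
import base64
import secrets

# 16 cryptographically random bytes, Base64-encoded so they can be
# pasted into a config file as the per-application entropy value
entropy = base64.b64encode(secrets.token_bytes(16)).decode("ascii")
print(entropy)
```

Running it once per application and pasting the printed string into the config achieves the same thing as the .NET snippet discussed above.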
8

No, for the reasons you have already stated: don't design your own algorithms. You can achieve resistance to rainbow tables by using unique salts; there is no need to mess with the constants of your hashing algorithm. The standard algorithms have been subject to thorough cryptanalysis by international experts (as happened, for example, with NIST SP 800-90's Dual_EC_DRBG), and it's likely you won't have the resources to do the same for your constants. In most cases, however, the numbers are chosen essentially at random. You might even gain an advantage from choosing your own constants in some cases, but I still don't recommend it.
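To illustrate the "unique salts" point with a sketch of my own (illustrative Python, not part of the original answer): deriving each password hash with a fresh random salt means identical passwords produce different digests, so a precomputed rainbow table is useless.

```python
import hashlib
import os

def hash_password(password: str, salt: bytes = None):
    """PBKDF2-HMAC-SHA256 with a unique random salt per password."""
    if salt is None:
        salt = os.urandom(16)  # fresh salt for every stored password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

# Same password, two different salts -> two different digests
salt1, digest1 = hash_password("hunter2")
salt2, digest2 = hash_password("hunter2")
assert salt1 != salt2 and digest1 != digest2

# Verification re-derives the digest using the stored salt
_, check = hash_password("hunter2", salt1)
assert check == digest1
```

Note that the hash function's internal constants are untouched; all the per-user variation comes from the salt.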

Edit: Fsmv has pointed out that you shouldn't implement the hashes yourself but use a library instead. I agree. If you really wanted to use your own constants, you would get a well-known library implementation of the algorithm (watch out for the license) and apply your changes to that code. Such implementations have been hardened against side-channel attacks.

user10008
3

I'm not a security expert, but didn't the NSA get a bad reputation for intentionally choosing breakable constants for one of the NIST standards (Dual_EC_DRBG) a while ago?

It depends on your threat model I guess. If you're worried about the NSA, then maybe you should hire the best cryptographers in the world and have them find better constants. Otherwise, just go with what's in the standards, because chances are they're pretty darn secure against other agents.

user541686
2

Short answer: No

The constants are part of the algorithm. If you don't trust the constants, then don't trust the algorithm. It is worth looking at some examples.

Merkle-Damgård hashes

One constant that could be changed in such an algorithm is the IV. For security analysis, changing the IV is equivalent to prepending one block of key material to the data. But prepending would be preferable, since it does not require changing the actual algorithm.

However, it is already known that just prepending some key material is not a good way to construct a keyed hash. A better approach is to use an HMAC construction.
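The difference can be sketched in a few lines (illustrative Python with a dummy key, not part of the original answer): the naive keyed hash simply prepends the key to the message, which for SHA-256 leaves it open to length-extension attacks, while HMAC wraps the same hash in the standard two-pass construction.

```python
import hashlib
import hmac

key = b"\x00" * 32      # dummy key for illustration; use a random secret
message = b"some message"

# Naive keyed hash: prepend key material to the data
# (for analysis purposes, roughly what changing the IV amounts to)
naive = hashlib.sha256(key + message).hexdigest()

# HMAC: the standard way to build a keyed hash from SHA-256
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

print(naive)
print(tag)
```

Both produce a 256-bit tag, but only the HMAC construction is considered a sound keyed hash.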

The IV is not the only constant in a Merkle-Damgård hash; there are constants in the compression function as well. If you choose your own constants, there is a significant risk that they will be weak for some reason. So even if all research points to the original being secure, your version might be weak. And should a weakness be found in the original algorithm, it is unlikely that you managed to come up with better constants.

In fact, to come up with better constants in that case, you would basically have had to find both the weakness and a solution. In other words: if you could not write a paper about why your constants are better than the originals and get it accepted in a cryptography journal, then your constants aren't any better.

Dual_EC_DRBG

This is a good example of not trusting the constants. The original constants came with no explanation of how they were constructed. That is a warning sign, and those constants shouldn't be trusted.

Further analysis of how those constants could have been constructed revealed a possible backdoor. The entire algorithm is now considered suspect.

If you wanted to use that algorithm, you shouldn't be using the original untrusted constants. But if you generated your own constants, how could you make anybody else trust your constants?

In fact, with the knowledge we have now, Dual_EC_DRBG with nonstandard constants is one of the simplest ways to create a cryptographic algorithm with your own backdoor and make it look like an innocent mistake.

Dual_EC_DRBG shows why, once there is justified skepticism about the constants in an algorithm, you shouldn't trust the algorithm until somebody comes up with real criteria for deciding what good constants for that algorithm are.

kasperd