24

After a password leak, is there a Levenshtein distance beyond which a newly derived password can be considered safe?

I assume yes, although if e.g. the old word was "password" and the new one is "drowssap", the distance is 8 and we have a "new" password that is, in this case, a very lazy change that is surely not secure.

I was wondering about whether in targeted attacks, a hacker could build up a dictionary for a single person using a list of passwords already found (e.g. from a compilation of breaches).

I do understand that the Levenshtein distance is not a good indicator of a safe password on its own. However, I do wonder at what maximum distance a brute force of all possible variations of a given password stops being feasible. Or, the other way around: if I know the password was "password" (standing in for any 8-character password) and I expect the person to use a similar password in the future (e.g. I have an aunt who I know would do this), what minimum distance would I have to give her so that I cannot easily calculate and guess her new password in an offline attack, assuming I was able to retrieve the password hash?

I know the question is theoretical, but I aim to understand the bigger picture, e.g. whether the Levenshtein distance can play any important role in password security.

I am aware of all the good practices that are usually recommended.

schroeder
  • 123,438
  • 55
  • 284
  • 319
kaiya
  • 422
  • 1
  • 3
  • 11
  • 14
    This is not just theoretical, this is trying to count the ["angels dancing on a pin"](https://en.wikipedia.org/wiki/How_many_angels_can_dance_on_the_head_of_a_pin%3F) – schroeder Nov 09 '21 at 10:53
  • 20
    Appendix A of NIST 800-63B addresses this well. Avoiding password patterns is specifically why they recommend against arbitrary password complexity requirements and arbitrary password expiration times. Passwords should only be force expired on events. Timed expiration and complexity rules encourage patterns that make the password weaker. – user10489 Nov 09 '21 at 12:22
  • 21
    You present a major problem as support. Changing `password` to `drowssap` makes the 2 strings as different as possible for this metric and you even state it's not secure. That sounds to me like Levenshtein distance is not the tool for the job here. – Corey Ogburn Nov 09 '21 at 19:22
  • 1
    There are "only" 8! (40320) possibilities to (re-)order "password", but there are 26^8 (208G = 2*10^11) possibilities for an 8 character password consisting only of lower-case letters. The the first one is less than one millionth of the strength of the second. And when considering that your change is basically just reversing the old password, i't be about one of the first guesses an attacker might make. – U. Windl Nov 12 '21 at 12:40

7 Answers

30

Levenshtein distance as a proxy for password strength is extremely limited, for the reasons that schroeder has outlined. And the user experience will probably be poor.

But the question is still academically interesting, and may be useful as a component for some use cases - so it still deserves a thorough answer. :D

Generally, the minimum Levenshtein distance between a previous password and a new one should be the same size as the minimum length of a randomly-generated password that would resist brute force (adjusted to the threat model). But it's important to note that even this minimum is often inadequate.

My answer is based on a few factors:

  1. the speed (attack difficulty) of the target hash type
  2. how much information about the previous password may be available to the attacker
  3. the methodology used to generate the initial password
  4. the usefulness of Levenshtein distance as a proxy for password change strength

First, about speed.

  • Worst case - a "fast" hash like MD5 - many different kinds of attacks become possible. This will allow us to put an upper bound on the minimum Levenshtein distance.

  • Best case - a "slow" hash like bcrypt - attack speed slows significantly, but is not as slow as it used to be (more about that below).

Second, about the user's previous password.

We must assume that the attacker has the user's previous password. Not only is password reuse chronic, but nowadays, even if the attack is against a slow hash, modern attack stacks include a highly efficient "per target" attack mode called a "correlation attack" (made possible by John the Ripper's "single" mode and later by hashcat's -a 9 mode) that leverages knowledge of previous passwords, even in bulk across a large leak.

In correlation attacks, we assume that the attacker has access to the user's previous password (from a previous leak, from a different site due to password reuse, etc.). The attack takes the user's old password as a "base" and then applies a specific set of "mangling rules" (append $, capitalize the first letter, etc.) to each plaintext, but just for that user's hash. For larger batches of users and slower hashes, this dramatically increases the attack speed - because instead of trying the same candidate against every hash in a long list (as most traditional attacks do), correlation attacks apply those mangling rules just to that user's old password and then try the resulting candidate against just that user's hash.
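
As a rough illustration - this is not how John the Ripper or hashcat actually implement their rule engines, just a toy Python sketch with an invented, tiny rule set - per-target candidate generation from a known old password might look like this:

```python
# Minimal sketch: derive per-target guesses from a user's known old password.
# The rules below are illustrative only; real tools ship far larger rule sets.

def mangle(old_password: str) -> set[str]:
    candidates = {old_password}                 # straight reuse is guess #1
    candidates.add(old_password[::-1])          # reversed ("password" -> "drowssap")
    candidates.add(old_password.capitalize())   # capitalize the first letter
    for suffix in ("1", "!", "$", "2021", "2022"):
        candidates.add(old_password + suffix)   # common appends
    leet = str.maketrans({"a": "@", "e": "3", "o": "0", "s": "$"})
    candidates.add(old_password.translate(leet))  # naive "leet" substitutions
    if old_password and old_password[-1].isdigit():
        # increment a trailing digit, e.g. "password2" -> "password3"
        candidates.add(old_password[:-1] + str((int(old_password[-1]) + 1) % 10))
    return candidates

if __name__ == "__main__":
    for guess in sorted(mangle("password2")):
        print(guess)
```

Each candidate is then hashed and compared against that one user's hash only, which is why the approach stays cheap even against slow hashes.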

This is not just academic - real-world cracking now heavily depends on correlating user:password or email:password leaks and cracks against new targets. So we must assume that the attacker has the previous password.

Third, about the password generation methodology.

  • Worst case - the passwords are single human-generated words that are easily found in a password frequency list (like "123456"). If a human selects the next password, the likelihood that it will be similar to the old one is high, so even very low Levenshtein distances are probably relevant.

  • Best case - the passwords are entirely randomly generated, both initially and later. In this case, Levenshtein distance isn't as relevant (or rather, measuring the Levenshtein distance between two randomly generated passwords is just a proxy for measuring the quality of their randomness).

Fourth, about Levenshtein distance as a proxy for password strength.

It is useful to think about this in terms of practical password entropy - how hard it is to attack in the real world, with full knowledge of how humans psychologically "chunk" password components. (This is also why Shannon entropy is notoriously bad at assessing the strength of non-randomly-generated passwords: it assumes brute force - that each letter in a word is a "chunk" that has to be independently attacked - instead of how a human remembers password components ("just add my kid's name").)

  • Worst case - at the lowest entropy levels, as discussed above - if a user just increments a number by 1, or appends a different special character, etc., the Levenshtein distance will be extremely low.

  • Best case - at the most complex - inserting a randomly selected character in a random position in the string is probably the highest practical entropy change (the hardest to predict) that can be made. For example, if we randomly inject 8 characters into a password, the delta in practical entropy is basically the same as if we had generated an 8-character random password (see the sketch below).
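
As a minimal sketch of that best case - assuming Python's secrets module for the randomness; the function name and alphabet are illustrative choices only:

```python
import secrets
import string

def inject_random_chars(password: str, n: int = 8,
                        alphabet: str = string.ascii_letters + string.digits) -> str:
    """Insert n cryptographically random characters at random positions."""
    result = password
    for _ in range(n):
        pos = secrets.randbelow(len(result) + 1)   # random insertion point
        result = result[:pos] + secrets.choice(alphabet) + result[pos:]
    return result

# Example: inject_random_chars("password") might yield something like "pXas5sWwoQr2dK9z".
```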

Given all of the above ...

If we assume all worst cases (human-generated password, poor password-selection strategy, etc.), the answer should be clear: the amount of change in the password should be enough to withstand attack on its own - totally independent of the previous password (as if the first password were empty), and as if it were stored using a very fast hash like MD5. This reduces to how quickly we can reasonably expect a randomly generated password to be exhausted for a given threat model, which is covered well elsewhere (but often hovers around 13-15 random characters).

Unfortunately, as with many attempts to measure password strength, it's not that easy. This is because a password change with high Levenshtein distance can still have very low practical differential entropy.

For example, if the user's first password is 'password', and their new password is 'passwordrumpelstiltskin', the Levenshtein distance of 15 sounds terrific. But the practical Levenshtein distance - of real-world changes to the initial string as humans apply them - is not 15, but rather one ("add a word").
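
To make the numbers concrete, here is a textbook dynamic-programming Levenshtein implementation in Python (a generic sketch, not specific to any cracking tool); it confirms the distance of 15 here and the 8 from the question, even though both changes are trivially guessable:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic edit distance over insertions, deletions and substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

print(levenshtein("password", "passwordrumpelstiltskin"))  # 15
print(levenshtein("password", "drowssap"))                 # 8 - the maximum for two 8-character strings
```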

In other words ... just like Shannon entropy, Levenshtein distance as a measure of password strength is really only useful in cases where the passwords are already randomly generated. And if you're doing that, you already have the information necessary to keep them strong, without trying to measure the difference between two of them.

In even other words ... very low Levenshtein distances are an OK proxy for password change weakness, but high Levenshtein distances are a very poor proxy for password change strength. So my answer tries to find a balance between the two.

Royce Williams
  • 9,128
  • 1
  • 31
  • 55
  • 2
    One suspects that this task calls for an extension of Levenshtein distance that allows more operations: insert a substring found in a dictionary, reverse a substring, move a substring, substitute multiple occurrences of the same original character with the same replacement character. Which naturally would be far far harder to calculate. – Ben Voigt Nov 11 '21 at 20:54
  • 1
    Indeed! It's exactly why "how strong is this password" is an unsolved problem. We can reliably detect some kinds of weak ones, but declaring one to be definitively strong is especially difficult when some kinds of human password selection psychology are so indirect. – Royce Williams Nov 12 '21 at 01:16
26

You explicitly want to know if one can calculate a "safe" Levenshtein distance for a password. The answer is "maybe", but it would be irrelevant.

whether in targeted attacks, a hacker could build up a dictionary for a single person using a list of passwords already found (e.g. from a compilation of breaches)

Sure, this could happen. But no attacker is going to use a complex analysis model to create a probability map for the universe of potential future string matches. They are going to look for obvious patterns that the average person might use.

  • Reversing a string doesn't require mathematical analysis. (password > drowssap)
  • Adding a temporal seed or a counter to a password does not require analysis (passwordsummer, password2)
  • Using a limited dictionary does not require calculus (Fred, Spot, Sailing, 12131969)

So, while you are interested in mathematical models to find a "safe" password that might be just on the liminal edge of an attacker's analysis model, the "safer" method is not to use a pattern for your passwords, but to use randomly generated strings instead. And attackers don't break out their slide rules and neural nets to get clever at guessing future passwords; they just follow the obvious patterns.

Laplace's Sunrise Problem comes into play in situations like this. Sure, you can calculate probabilities based on observable phenomena, but that is superseded by one's understanding of how the system works. You can use Bayesian analysis to calculate to a high level of precision how likely it is that the sun will rise tomorrow. Or, since you know why the sun rises, you can have a better answer ...

schroeder
  • 123,438
  • 55
  • 284
  • 319
  • 5
    If I'm an attacker for one valuable target, definitely, I will go for it. – kelalaka Nov 09 '21 at 13:00
  • 7
    So, you would run a Levenshtein distance algorithm on the password data set from an individual to create a probability map for strings? And not just look for creation patterns? – schroeder Nov 09 '21 at 13:41
  • 4
    Well, there are orders for search methods, it is not the first, of course. – kelalaka Nov 09 '21 at 13:44
  • Thank you so far, I really appreciate your answer. I am aware that it would be best to approximate randomness when choosing passwords, but I know quite a few older people who will reject the thought and could maybe at least be motivated towards a very minor effort, hence my consideration. ;) – kaiya Nov 09 '21 at 14:48
  • 1
    The best is the diceware type; real random is hard for us to remember https://xkcd.com/936/ – kelalaka Nov 09 '21 at 18:20
  • 2
    @kelalaka if you are attacking a high valuable target, generating a monster wordlist with petabytes of size should not be on your to-do list. The target may have PBKDF2 with thousands of rounds, so throwing trillions of variations of passwords at it isn't going to be productive. – ThoriumBR Nov 09 '21 at 22:23
  • 2
    @kelalaka I see your 936 and I raise you 538 https://xkcd.com/538/ – Tim Nov 10 '21 at 00:18
  • 2
    @ThoriumBR I was mainly talking about the permutation in the OP's example, for which we have a good permutation generation algorithm that an attacker can use. When insert and delete are considered, it starts to be quite complicated. The amount of possible space can be tremendous. The edit/delete would be the last resort for me. – kelalaka Nov 10 '21 at 17:08
  • 1
    Extra points for invoking the Sunrise Problem - insightful. – Royce Williams Nov 10 '21 at 18:53
  • Oh, for heaven's sake, "randomly generated strings" -- impossible for people to remember -- are _terrible_ for security. Passphrases ftw – Auspex Nov 12 '21 at 19:44
7

From a security perspective, this metric is meaningless.

First, a password leak typically leads to a trivial attack in which the attackers will simply attack all accounts in the leak, looking for those who did not change their passwords at all. These are the low-hanging fruit, and why should an attacker bother with your account, which changed its password, when he can have hundreds that didn't?

Second, Levenshtein distance is not how brute-force cracking tools work. Even if an attacker targets you and wants to brute-force one of your accounts using past password leaks involving several passwords of yours, there are two possibilities:

a) your passwords follow an easy-to-spot pattern, such as "name-of-site + constant string" or "constant string + year" or "constant string + random string" or something similar. In that case, a targeted attacker will write a custom brute-forcer following your pattern.

b) there is no visible pattern. The attacker will rely on his standard cracker tool, possibly tuning it to not even try passwords that clearly fall outside your choices (e.g. if all your passwords are 8+ characters, he'll start at that length, or if none of your passwords use numbers, he'll exclude those).

In case b), the common things that cracking tools do are simple transformations: reversing strings, switching letters around (password => dassworp, apssword, etc.), and common substitutions (passw0rd, etc. - that's a zero).

Neither of those things has much to do with Levenshtein distance. Though you could calculate a distance for each variation, it says nothing about, say, how soon the cracking tool will try that variation.

Tom
  • 10,124
  • 18
  • 51
  • 2
    Well, as I argue, it's a mixed bag, but not *totally* without value - very low Levenshtein distances are an OK proxy for password change weakness. But high Levenshtein distances are definitely a *very* poor proxy for password change strength. – Royce Williams Nov 10 '21 at 18:52
4

Levenshtein distance is the incorrect metric here. The correct cryptographic metric for the security of a password is the size of the output keyspace (that is, the image of our procedure - the set of possible outcomes). This size is upper bounded by the number of distinct series of operations you can perform (but is not necessarily equal to it†). Under Kerckhoffs's principle we assume that the attacker knows all the possible operations we can perform in our system, and that our only secret information is our choices within this freedom.

In this model we assume that the attacker knows the old password. Say this password is 16 characters long, and lowercase alphanumeric (36 characters), and we keep it this way, simply replacing 2 characters with new ones. How strong can our new derived password be?

Well, we have 16 choices for the first character to replace, 15 for the second character, divided by two because order doesn't matter. Then each character has 35 choices to be replaced with, giving a total output space of

16*15/2*35*35 = 147000 outcomes
log2(147000) ~= 17.1 bits of security

If we had chosen a new random 16 character lowercase alphanumeric password we would have had 16 log2(36) ~= 82.7 bits of security. In other words, our derived password is ridiculously insecure.

What if we added the possibility of reversing our password? This (for most passwords, but not palindromes!) doubles the output space, adding 1 bit of security to our upper bound. You can add a whole lot of operations like this (add an exclamation mark, swap case, etc., etc.), but each increases our security upper bound only by how much freedom of choice we add. A binary yes/no isn't a whole lot of choice. Nothing adds more freedom than simply generating a new password.
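
The arithmetic above is easy to check; a quick Python sketch reproduces both figures (assuming the same 16-character lowercase-alphanumeric setup as above):

```python
from math import comb, log2

positions = comb(16, 2)        # which 2 of the 16 characters get replaced
replacements = 35 ** 2         # 35 alternatives for each replaced character
derived_space = positions * replacements
print(derived_space, log2(derived_space))   # 147000, ~17.1 bits

fresh_space = 36 ** 16                      # a brand-new random 16-character password
print(log2(fresh_space))                    # ~82.7 bits
```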


† The reason it's an upper bound and not exact is because multiple series of operations might result in the same outcome, inflating our upper bound to above the actual key space. For example if you allow the operations "add a character", "remove a character" and "replace a character", then add/remove can simulate a replacement, giving us the illusion of more choice than we really have in possible outcomes.

Finally, humans are biased, so unless you write a computer program that fairly chooses between all possible modifications to a password, you will have even less security as e.g. you are more likely to add your birthyear to a password than a random two digit sequence.

orlp
  • 391
  • 2
  • 15
2

Using a new password with a Levenshtein distance of 1 would probably not be too smart: the cost of checking a good subset of such variations for each password wouldn't be too high, and doing so may be successful often enough to be worth it (a distance of 2 still wouldn't be too smart, but it is less practical to check for each password).

Checking further than that with Levenshtein distance isn't all that effective. People don't typically use arbitrary additions, removals or replacements to pick new passwords (at least in my experience, but the key word is "arbitrary"). Also, the number of possible strings (very) roughly scales exponentially with Levenshtein distance. If you just consider replacement with lowercase letters, "abcdefgh" has about 26*8 strings distance 1 away from it, around (26*8)² (or more like 26²*C(8,2)) at distance 2, around (26*8)³ at distance 3, etc. So checking all of those would be quite time-consuming and not likely to be successful (checking (26*8)³ passwords may only take a fraction of a second, but that's still not a good use of time compared to other strategies for guessing the passwords people might actually use). That said, if you've used this strategy before, or you're the victim of a targeted attack and they've tried all better guesses for your password, or you've posted a question about this on a public Q&A, then it wouldn't be too unreasonable for them to try this.
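
To get a rough feel for that growth, here is a small brute-force count of substitution-only variants over lowercase letters (a toy sketch; real tools would also cover insertions, deletions and larger alphabets):

```python
import string

def substitution_variants(word: str, alphabet: str = string.ascii_lowercase):
    """Yield every string exactly one lowercase-letter substitution away from word."""
    for i, original in enumerate(word):
        for c in alphabet:
            if c != original:
                yield word[:i] + c + word[i + 1:]

base = "abcdefgh"
one_away = set(substitution_variants(base))
two_away = {v2 for v1 in one_away
            for v2 in substitution_variants(v1)} - one_away - {base}

print(len(one_away))   # 200    (25 * 8)
print(len(two_away))   # 17500  (25^2 * C(8,2))
```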

But, off the top of my head, there are a few things indirectly related to Levenshtein distance that people might commonly use (and hackers might therefore commonly check for):

  • Changes involving common suffixes may be checked (e.g. "password" → "password1!" → "password2@" or "password5" → "password6")
  • Changes in capitalisation may be checked (e.g. "password" → "PassworD")
  • Common variants of words may be checked (e.g. "color" → "colour")
  • Letters replaced with numbers or symbols may be checked ("correct horse battery staple" → "c0rr3c7 h0r53 84773ry 574p13" would still be a terrible password change even though the Levenshtein distance is 16)

That said, the above strategies for picking passwords are terrible regardless of whether they're used after a leak or not.

Of course, any other strategy that makes it easy to guess one of your passwords from other passwords of yours that are known (or that produces a password that is bad for other reasons) is not a good strategy either.

Really the only strategy you should be following is to pick a new, good and unrelated password (or, better yet, just use a well-known password manager to generate one). This should already guarantee that the Levenshtein distance is high enough for it to not be practical to try to find the new password using Levenshtein distance, so it's not something you should specifically worry about.

NotThatGuy
  • 698
  • 5
  • 6
0

I think the question is possibly relevant. Probably a better example would be: suppose Donald Trump has chosen a password of "My password is so long that no computer today could possibly brute force it!". Now suppose that some site is storing the password unhashed and is breached. If Donald Trump then changes his password to "My password is so long that no computer today could possibly brute force it@", I think it's likely that some researcher or hacker would be able to guess the change.

-1

Security at the expense of usability comes at the expense of your active user count.

I'm sure you know the experience. The site states a list of specific criteria you must meet in a password - mixed case, a digit, certain special characters or none. You comply, but the system rejects your password anyway due to another rule they didn't state. Maddening.

Complex distance requirements will be completely impossible to communicate to the user. There is no practical way of dumbing down "must have sufficient entropy compared to all past passwords" to where people will sympathize with your requirements.

Keeping their old passwords around "in the clear" is a cringeworthy mistake

To make a comparison you need to have their old passwords stored "in the clear". How did that even happen? Most likely it happened because you do store their current password in the clear. Whoops!

The only way I can see you pulling that off is at the password-change screen, grabbing their old password as they change to their new one.

However, this is still a bad tactic. When you inevitably do get hacked, you will be giving hackers a treasure trove of past passwords, including passwords that were set before you started your entropy policy. They will have no trouble using algorithms to anticipate other passwords the user might use on other sites.

  • Hey, thanks for the answer, but I do feel like this is kind of off-topic, as the question was about Levenshtein distance and not about how passwords are stored server-side. And even then I would disagree, because this is not something that would require a server-side check; it could be done client-side ;) – kaiya Nov 13 '21 at 17:59