
QUICK SUMMARY:

It seems like modern websites, for security's sake, should all have the client hash its own password before sending it to the server, which then rehashes it. This would avoid leaking the original password (likely reused across different sites), and would also force an attacker to crack each intercepted credential individually, rather than simply decrypting a single SSL key.

ORIGINAL QUESTION:

Just did a quick check to make sure, and I was amazed to see that major websites appear to still be sending passwords in their original format for logins, albeit over SSL/TLS. I understand that they are hashing and salting/peppering them before putting them into the database. However, I still see an issue with that.

It seems as if hashing all passwords with a site-unique salt on the client-side before sending them, and then hashing the hash would be substantially more secure for the clients, especially in light of recent news:

The NSA and its allies routinely intercept such connections -- by the millions. According to an NSA document, the agency intended to crack 10 million intercepted https connections a day by late 2012. The intelligence services are particularly interested in the moment when a user types his or her password. By the end of 2012, the system was supposed to be able to "detect the presence of at least 100 password based encryption applications" in each instance some 20,000 times a month. (Emphasis added)

While I understand that from an efficiency perspective, having the client hash the password is unnecessary, I'm more interested in the fact that hashing the original password would mean that even if the data were intercepted, whether on the server before hashing or in transit by decrypting the SSL connection, it would only be useful for logging into the website it was intercepted from.

I'd assume that should be a major concern considering lots of people reuse passwords, yet these massive sites are still sending the original password and hashing it on their end. Is there a technical reason behind this, or is it just old practice that should ideally be changed?

EDIT FOR CLARIFICATION: I am not suggesting that these sites begin doing all the hashing on the client side and just throw it into their DB as-is; they should definitely hash/salt the hash. My suggestion is that the client hashes the password so that the server never learns the original data, and therefore the same password can safely be reused elsewhere: a compromise of one website would not mean a compromise of your password across other sites. As a nice bonus, it would also limit what a malicious proxy gains to access to the sites actually logged into, rather than handing it your password.
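
Roughly, what I have in mind is something like the following minimal sketch in browser JavaScript, using the standard Web Crypto API. SHA-256 and the salt scheme here are purely illustrative; the point is only the flow (client hashes first, server salts and hashes the result again):

```js
// Client side: derive a site-unique hash of the password before it
// ever leaves the browser. The server then treats this value as the
// "password" and salts/hashes it again before storing it.
async function hashForLogin(password, siteSalt) {
  const data = new TextEncoder().encode(siteSalt + ':' + password);
  const digest = await crypto.subtle.digest('SHA-256', data);
  // Hex-encode so the result can be sent as an ordinary form field.
  return Array.from(new Uint8Array(digest))
    .map(b => b.toString(16).padStart(2, '0'))
    .join('');
}

// Illustrative usage; the endpoint and salt value are hypothetical.
// const clientHash = await hashForLogin(password, 'example.com-v1');
// await fetch('/login', { method: 'POST',
//   body: new URLSearchParams({ user, clientHash }) });
```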

Ecksters
  • Assuming you accepted the hash on the server side, if your database were ever compromised then an attacker could log in as anyone, because they would have the hash. Whereas if you only accepted plain text server side and hashed it there, they would need to crack the hash before they could log in as anyone else. –  Mar 04 '15 at 13:59
  • I added a clarification, does that respond to your comment? –  Mar 04 '15 at 14:09
  • It seems that using the mechanism you suggest, the hash itself effectively becomes the password (not from a UI point of view, but from a client-server point of view). Hence, the advantages are limited as far as the client-server interaction is concerned. – Bruno Mar 04 '15 at 14:53
  • Yeah, that's the idea, to avoid the plaintext original password going anywhere other than the user's head –  Mar 04 '15 at 14:55
  • Sure, but anyone in a position to intercept the hash would then be able to authenticate to that server anyway. It seems to make the client side more complicated, for a relatively limited improvement. – Bruno Mar 04 '15 at 15:09
  • Yeah, the intent is more to protect incompetent users from having their accounts across other sites accessed than anything. Isolate the damage automatically, so to speak. –  Mar 04 '15 at 15:12
  • @Bruno Oh, I also just realized that this would force a MitM attack to crack individual passwords, rather than breaking the SSL encryption and having a whole bunch of free data from there. –  Mar 04 '15 at 15:25
  • Bolded the final line in the question to show what differentiates this question from the question that made it marked as a duplicate. – Ecksters Mar 06 '15 at 19:34

1 Answer


We have to distinguish between the following two scenarios:

1) A determined hacker tries to get the passwords

In this situation SSL usually provides enough protection; well-implemented SSL is very hard to break. And as soon as the attacker could successfully mount a man-in-the-middle attack, the client-side hash would not add any protection anyway, because the attacker could easily remove the (JS) script that calculates the hash.

2) An automated attack is started

Should there be an organisation capable of reading SSL traffic, calculating a client-side hash would increase their costs, because in an automated attack they would only see the hash of the password. If their interest in a particular site is big enough, though, they could write code to automatically remove the script on that site.

If they only have the hash, they could still try to brute-force it. Since you would probably use a fast hash client side (SHA*), brute-forcing should be relatively easy. Using a slow hash (PBKDF2 or bcrypt) is difficult to implement in a slow client-side language like JS. And even if you did calculate a slow hash client side, you would have to reduce the time spent on the server side, because you don't want to keep the user waiting too long. Trading server-side hashing for client-side hashing would therefore decrease security, because server-side hashing can be done faster (more rounds) than client-side hashing.
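
To illustrate, such a slow client-side hash is at least expressible with the Web Crypto API (`crypto.subtle`, see also the comments below). A rough sketch using PBKDF2 follows; the iteration count is purely illustrative, and every round spent here is a round the server could have spent itself:

```js
// Sketch of a slow client-side hash: PBKDF2 via crypto.subtle.
// Raising `iterations` costs the browser real time, which is exactly
// the trade-off described above.
async function slowClientHash(password, siteSalt, iterations = 100000) {
  const enc = new TextEncoder();
  const keyMaterial = await crypto.subtle.importKey(
    'raw', enc.encode(password), 'PBKDF2', false, ['deriveBits']);
  const bits = await crypto.subtle.deriveBits(
    { name: 'PBKDF2', hash: 'SHA-256', salt: enc.encode(siteSalt), iterations },
    keyMaterial, 256);
  return new Uint8Array(bits); // still to be salted and hashed again server side
}
```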

➽ This means you could make their life harder, but you could not stop them if their interest is big enough. If the user is security-aware and chooses a very strong password, a client-side hash could help, but such a user would probably not reuse the password.

martinstoeckli
  • Would recent developments in JavaScript (such as asm.js) make issues regarding calculation speeds less of a problem? –  Mar 04 '15 at 14:44
  • @Eckster - I doubt it, because this is a race between a relatively slow interpreted script language and optimized hardware like GPUs or even dedicated hardware, which can do calculations in parallel. – martinstoeckli Mar 04 '15 at 14:48
  • Well, I think the ideal would be to use a hashing algorithm that can't be done on optimized hardware, such as SHA512 (64-bit operations) or bcrypt/scrypt, and HTML5 has added Web Workers, so multi-core should be a possibility. This is intriguing me a lot now :P Maybe it's a development that'll happen once the tech is there. –  Mar 04 '15 at 14:54
  • @Eckster - There is no reason this cannot be done on optimized hardware. Well-known password cracker tools like [hashcat support brute-forcing SHA* on GPUs](http://hashcat.net/oclhashcat/#performance), not to mention dedicated NSA hardware. – martinstoeckli Mar 04 '15 at 14:59
  • SHA256, yes, but anything above that requires 64-bit operations; however, I'm not sure how FPGA cracking works as far as those limitations are concerned, and I'm sure that's what the NSA would likely be using –  Mar 04 '15 at 15:01
  • https://github.com/kramble/FPGA-Litecoin-Miner Well now, I stand corrected. Does there exist any hash that's designed specifically to be optimal on ARM/x86_64 architectures? –  Mar 04 '15 at 15:07
  • @Eckster The closest you'll get is `scrypt`, which has a controllable memory usage as well as its controllable time usage. That can hit commodity hardware; at the level of an attacker who can get custom circuit boards made, they can run that in parallel as well. The whole concept of password hashing is that you cannot rely on just a plain hash to provide security; password hashes have work factors, so you can control how long they take (which means you can keep them slow as computing power increases). (A minimal sketch of these work factors follows after the comments.) – cpast Mar 05 '15 at 20:00
  • Note: Key derivation and hashing are available in `crypto.subtle`, which is part of the standard Web Crypto API and implemented in native code in browsers, so the speed argument is not that strong. Even if only implemented in JavaScript, the relative cost could be comparable to a busy web server. The cost cannot be too high, otherwise attempting to log in could be an easy way to DoS a server (for desktop computers, maybe not for old phones). – Rob W Mar 06 '15 at 21:04
  • @RobW - This is good news, thanks for the info. It seems that key derivation functions are [not widely supported yet](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto/deriveKey), but anyway it looks promising. – martinstoeckli Mar 08 '15 at 11:13
  • Saying the NSA can read SSL is kind of a broad statement; I would consult the following discussion: https://security.stackexchange.com/questions/60717/https-still-nsa-safe. – Ohad Schneider Aug 12 '17 at 12:51
  • @OhadSchneider - Thanks for your comment, you are right and I removed this unnecessary part from the answer. – martinstoeckli Aug 13 '17 at 20:21
  • Given an automated attack by an attacker who is assumed to be able to break SSL, the attacker may not be willing to reveal the existence of the attack (and their abilities), and hence they may be unwilling to remove the scripts (since that would be potentially detectable). Hence, the protection from "read-only" attacks may still be desirable. – fiktor Mar 03 '19 at 21:12
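
To make the work-factor discussion in the comments concrete, here is a minimal sketch using Node.js's built-in `crypto.scrypt` (the `scrypt` mentioned above by cpast); the cost values shown are illustrative, not recommendations:

```js
const { scrypt, randomBytes } = require('crypto');

// cost (N), blockSize (r) and parallelization (p) are the tunable work
// factors: doubling N roughly doubles both the time and the memory the
// hash requires, which is how it stays slow as hardware improves.
const salt = randomBytes(16);
scrypt('correct horse battery staple', salt, 64,
  { cost: 16384, blockSize: 8, parallelization: 1 },
  (err, derivedKey) => {
    if (err) throw err;
    console.log(derivedKey.toString('hex'));
  });
```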