38

Edit: Updated to put more emphasis on the goal - peace of mind for the user, and not beefing up the security.

After reading through a few discussions here about client side hashing of passwords, I'm still wondering whether it might be OK to use it in a particular situation.

Specifically I would like to have the client - a Java program - hash their password using something like PBKDF2, using as salt a combination of their email address and a constant application-specific chunk of bytes. The idea being that the hash is reproducible for authentication, yet hopefully not vulnerable to reverse-engineering attacks (to discover the literal password) other than brute force if the server data is compromised.
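As a rough sketch of the derivation being proposed (the client in the question is a Java program, but the idea is the same in any language; the pepper value and iteration count below are illustrative assumptions, not taken from the question):

```python
import hashlib

# Hypothetical application-specific constant mixed into every salt.
APP_PEPPER = b"example-app-v1"

def client_hash(password: str, email: str) -> bytes:
    # Deterministic salt: the user's email address plus the fixed
    # application-specific bytes, so the hash is reproducible at login
    # yet differs between users and between applications.
    salt = email.lower().encode("utf-8") + APP_PEPPER
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)
```

The same password then produces unrelated hashes for different email addresses, and for the same email address in another application that uses a different pepper.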

Goal:

The client-side hashing is for the user's peace of mind: their literal password is never received by the server, even though it would be hashed in storage anyway. Another side benefit (or maybe a liability?) is that the hashing cost of an iterated PBKDF2 or similar rests with the client.

The environment characteristics are:

  1. All client-server communication is encrypted.
  2. Replayed messages are not permitted, i.e. the hash sent from the client cannot effectively be used as a password by an eavesdropper.
  3. Temp-banning and blacklisting IPs is possible for multiple unsuccessful sign in attempts within a short time frame. This may be per user account, or system wide.

Concerns:

  1. "Avoid devising homebaked authentication schemes."
  2. The salt is deterministic for each user, even if the hashes produced will be specific to this application because of the (identical) extra bytes thrown into the salt. Is this bad?
  3. Authentications on the server end will happen without any significant delay, without the hashing cost. Does this increase vulnerability to distributed brute force authentication attack?
  4. Rogue clients can supply a weak hash for their own accounts. Actually, not too worried about this.
  5. Should the server rehash the client hashes before storing?

Thoughts?

Foy Stip
  • 5
    I think I'm a bit confused about the threat model. What is the threat against which this defends. Is the password stored on the server? If not, then the token passed from the client to the server _is_ the password. If it isn't, then how do you prevent replay attacks. I see things I like in here, but I can't evaluate the scheme without understanding the threat model. – MCW Oct 23 '12 at 10:33
  • 2
    Sorry I really should have emphasised the third.. now fourth paragraph more, which outlines the goal of this versus a more conventional scheme: peace of mind for the user in the server not receiving their literal password. The server not having to perform an expensive PBKDF2 hash performed over n iterations is a bonus (although as a poster has pointed out it could also add defence against DOS). However I'm not trying to cater for a specific threat model other than the usual. – Foy Stip Oct 23 '12 at 23:24

8 Answers

58

Hashing on the client side doesn't solve the main problem password hashing is intended to solve - what happens if an attacker gains access to the hashed passwords database. Since the (hashed) passwords sent by the clients are stored as-is in the database, such an attacker can impersonate all users by sending the server the hashed passwords from the database as-is.

On the other hand, hashing on the client side is nice in that it assures the user that the server has no knowledge of the password - which is useful if the user uses the same password for multiple services (as most users do).

A possible solution for this is hashing both on the client side and on the server side. You can still offload the heavy PBKDF2 operation to the client and do a single hash operation (on the client side PBKDF2 hashed password) on the server side. The PBKDF2 in the client will prevent dictionary attacks and the single hash operation on the server side will prevent using the hashed passwords from a stolen database as is.
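A minimal sketch of this two-layer idea, assuming PBKDF2-HMAC-SHA256 on the client and a single SHA-256 on the server (parameter values and function names are placeholders):

```python
import hashlib

ITERATIONS = 100_000  # assumed client-side work factor

def client_side(password: str, salt: bytes) -> bytes:
    # The expensive, slow hash runs on the client.
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)

def server_side(client_token: bytes) -> bytes:
    # One cheap hash on the server. The database stores only this value,
    # so a stolen database entry cannot be replayed as a login token:
    # the server expects its pre-image.
    return hashlib.sha256(client_token).digest()
```

At registration the server stores `server_side(token)`; at login it recomputes the same value from the submitted token and compares it with what is stored.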

David Wachtfogel
  • 1
    You're right, there's the compelling reason to re-hash on the server: if an attacker gets a copy of the database without necessarily having write access, if I understand you correctly. Based on other's responses so far I'm still wary of the viability of this whole scheme but for the moment it seems like the combo of a heavy hash on the client and a final re-hash on the server is sounding promising. Thanks for the response. – Foy Stip Oct 23 '12 at 23:25
  • 2
    Other reason to hash client-side? If someone wants to automate login and routine tasks on your app (for example, in a `cron` job), if you don't use hashing, they need to store the password somewhere on the drive in plaintext. Sure, they could encrypt it, but then they'd need to be there to type in the encryption key, so it amounts to the same level of inconvenience. – Parthian Shot Jul 30 '14 at 20:01
  • 1
    And the argument against hashing client-side - that it doesn't solve the problem solved by hashing server-side - is spurious. The client-side hashing layer can be transparent and not affect any aspect of the backend. All hashing client-side does is make Eve's life harder, at minimal cost to the end user. – Parthian Shot Jul 30 '14 at 20:03
  • Hell, one could even do a single iteration of MD5 on the client and be safe. With a fully randomized hash coming in from the client, you are going to have a huge search space on the server side to rehash. Even super-fast MD5 will still take a very long time to search that entire space. – Jason Coyne May 19 '16 at 19:32
  • 4
    @JasonCoyne If the client only does a single MD5 and the server only does another single MD5 hash on the result an attacker who gains access to the server's hashed password DB can do a simple attack (even assuming salt). For each hashed password in the DB, take the most popular passwords, hash them twice and compare them to the hashed password in the database. Even if you take the 10 million most popular passwords this will take less than a second to run on each hashed password in the DB. – David Wachtfogel May 22 '16 at 05:04
  • 1
    I wonder why this isn't common sense, I mean browsers should be giving out alerts when you send a password input in plain-text. What if I don't trust xyz.com? I mean I'm already using random passwords but most people have 3 passwords and cautious ones 4. – EralpB Jan 23 '17 at 14:07
  • 3
    @DavidWachtfogel Don't ever assume a particular client. If you offload the heavy hashing to the client, attackers with specialized hardware will have an easy time of it, and users with poor hardware will have a hard time of it. Exactly the opposite of what you want. The beefy hash should be on the server since you know that hardware, and the quick, reassurance hash should be on the client. – NH. Oct 12 '17 at 21:27
  • Hashing is not the right way to protect credentials. Encryption is. If you do a pure hash on the client, then an attacker will not get the plaintext password, but they could still do a replay attack. You would need at least a session-specific salt in the hash to prevent replay attacks, which one could call a weak form of (one-way) encryption. But better go for proper encryption. Which SSL/TLS/HTTPS/IPSec are supposed to give you. – Christian Hujer Jan 03 '19 at 15:27
  • One still must use salt on the server side. – kelalaka Mar 19 '20 at 22:05
  • Just seen this old answer... some interesting insight about slow hash client-side and fast hash server-side. I've considered the same, although not implemented it. But your answer is the oldest reference I've seen for this idea. – paj28 Apr 18 '21 at 10:59
13

There are a few situations in which client-side hashing is worthwhile. One such circumstance is when the hash process is computationally intensive, which can be the case with PBKDF2.

Addressing your concerns:

  1. Also avoid unvalidated suggestions about cryptography you find on the internet. (Disclaimer: I am not Bruce Schneier.)
  2. Deterministic salts aren't a problem--the only real requirement of the salt is that it is unique for each user. The salt's real purpose is to prevent a brute force on one password from turning into a brute force on all passwords in the case of a compromised database. Even if you were to store a random salt in your database right beside the hashed password you would still reach this goal, provided each user's salt is different.
  3. As I mentioned above, PBKDF2 is nice because you can arbitrarily decide the computational difficulty of the hash. You could select a c such that a single hash on modern hardware takes seconds--effectively eliminating the risk of an API-level brute force attack. (Of course, your clients might not enjoy such a long delay at login.)
  4. A user can choose a simple password--they are only hurting themselves. If you wanted to eliminate this risk, you would have the server generate the hash the first time, provided the password is going over an encrypted channel.
  5. Yes, and you will need to uniquely salt these as well. In the event of a database compromise, you want to ensure that the attacker doesn't get information that allows him/her to directly authenticate as any user on your system. One caveat here is that you do not want your server-side hashing to be computationally intensive the way your client-side hash is. If your server-side hash takes too much effort, you open yourself to a CPU-exhausting denial-of-service attack vector: an attacker simply spams empty password authentication attempts over Tor, passwords which your server has to try hashing before it knows they are fraudulent, eventually leaving you with an overwhelmed server.
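Point 5 might look like this on the server side (a sketch under the assumption that the server keeps a random per-user salt next to the digest; the function names are illustrative):

```python
import hashlib
import hmac
import os

def store_token(client_token: bytes):
    # Cheap, uniquely salted rehash of the client's PBKDF2 output.
    # A single SHA-256 keeps the server's per-login cost negligible.
    server_salt = os.urandom(16)  # unique per user, stored alongside the digest
    digest = hashlib.sha256(server_salt + client_token).digest()
    return server_salt, digest

def verify_token(client_token: bytes, server_salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.sha256(server_salt + client_token).digest()
    return hmac.compare_digest(candidate, digest)
```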
Motoma
  • Thanks for the input on all of those points. Re: point 2, yes this was/is my understanding.. that it would be OK, as long as the salts are always different. For this scheme the salt will be different for different emails, and also different for the _same_ emails across _different_ applications/databases. – Foy Stip Oct 23 '12 at 23:30
  • Yes as you and others have pointed out, re-hashing on the server will definitely be necessary, though it will be done more cheaply as you say. Hmm.. Tor requests from the same user or from different machines behind the same NAT router will appear to originate from different IPs, won't they? – Foy Stip Oct 23 '12 at 23:30
  • Yes, there are simple tools out there that allow multiple requests from a single machine to be routed through multiple Tor exit nodes. – Motoma Oct 25 '12 at 10:52
10

If you hash the password on the client side, whatever the result is IS the password, so you're not gaining any real security. Any hack or information leak that would have revealed the plain text password will instead reveal the hashed password, which is the real password.

This shouldn't be confused with zero-knowledge authentication schemes, where an exchange of messages proves that the client knows the real password, without actually transmitting it.

ddyer
ddyer
  • 8
    If you mean the case of the server's DB being compromised, as David Wachtfogel pointed out, then yes you're right that the hash could just be retransmitted as a password. It needs to be re-hashed on the server. The gain is not beefed up security, but peace of mind for the user in the server not receiving a literal password. If on the other hand you mean man-in-the-middle attacks then the underlying encryption scheme protects against replay of anything, be it literal or hashed passwords. – Foy Stip Oct 23 '12 at 23:27
  • 2
    Sorry but "will instead reveal the hashed password, which is the real password" isn't really true from the user's perspective. If there is a leak then I bet every user would prefer to leak a hashed password instead of a plain password, especially if they use the same password for different services, which many users unfortunately do, so it's best when the password is hashed on both the client and server side. – Leszek Szary Jul 30 '19 at 07:04
  • @LeszekSzary It's very true when the attacker uses that hash to log in as me. It's a *big* deal when they lock me out of my account, steal my money, or speak in my name. Whether it was cleartext or a hash doesn't matter as much to me. Conversely, I hardly care that my hash was leaked if the attacker can't crack it or use it for authentication. Chances are I'll never know it was leaked in the first place. Most data breaches aren't reported. – stewSquared Oct 23 '21 at 06:45
8

Hashing on the client can be a good idea in some circumstances and for some reasons, but I would not make "user's peace of mind" one of them. I am all for users being in a harmonious frame of mind and at one with the Universe, but I find dubious the idea of promoting a way to induce users to reuse the same password on several sites.

A good case for client-side hashing is the way some "password safes" work: they compute a site-specific password by hashing the user's "master password" together with the site name. This gives most of the usability of always using the same password everywhere, while not actually giving your master password to dozens of distinct sites. But this works only as long as the password derivation algorithm is generic and not changing; this seems to be much better addressed by a Web browser extension than by an applet coming from the sites themselves (all the sites would have to cooperate so as to use applets which use the same password derivation algorithm, with site-specific data).

Another good case for client-side password hashing is when a slow hash is used (so as to make password cracking harder for an attacker who could grab a copy of the database of hashed passwords); it is tempting to try to offload the cost onto the client, since, when the client wants to connect, it is mostly idle and actively interested in connecting. However, slow hashing is an arms race between attacker and defender. Using Java will induce a slowdown (by a typical factor of 3), and some client systems can be quite feeble (e.g. cheap smartphones or ten-year-old computers). This is like picking up a sword instead of an assault rifle before entering a battle where the opponent will bring a tank.

But if what you want, as a user, is to protect your password against sloppy storage procedures by a site, then the right way to do it is to choose a different password for each site. (Personally, I keep a file of passwords, and all my passwords are generated randomly.)

Thomas Pornin
  • Thanks for taking the time to share your expert insights (I've read a lot of your other posts) on this question, and sorry for taking so long to respond - I don't hop on here very often. After weighing up the considerations I did end up implementing the client-side hashing, as much giving in to the temptation to offload the cost of a slow hash to the client (as you say) as having the server avoid handling raw passwords. As I wondered in the OP though and also as someone pointed out, a server-side rehash is necessary anyway, albeit a more lightweight one than PBKDF2. – Foy Stip Jul 24 '13 at 06:27
7

It seems like you're trying to invent your own cryptographic protocol. From the description of your approach, it does not seem like you have the background to do this in a secure manner. I highly recommend using existing protocols instead of creating your own.

First, it's not clear what threat model you think you're circumventing. You cite something called a "reverse-engineering attack" which has no real definition or meaning.

Second, your understanding of a salt's purpose and best practices for its generation appear to be lacking. Salts are not required to be kept secret. You can (and should) generate a unique salt from a CSPRNG for each new authentication token, and not from something like an email address (which might change). Fixed application-specific salts are sometimes called "peppers", and I am unaware of any cryptographic literature which supports or encourages their use.

Third, PBKDF2 is okay, but seriously just use BCrypt. BCrypt was designed for this and is in widespread use. Good BCrypt implementations will handle salt generation and work factor calibration / autodetection for you. You will have to implement these things yourself to use PBKDF2, and you will almost inevitably make mistakes.

Fourth, there is an existing approach to what you appear to be trying to do. Zero-knowledge authentication can be performed with SRP. The user's password is never transmitted over the wire, and a man in the middle cannot sniff anything useful with which to authenticate themselves. However, it is apparently difficult to implement correctly and there are not many existing libraries to do so, which should give you an indication of how difficult the problem actually is.

Long story short: Don't invent your own crypto. Use solutions and protocols that are widely implemented and have withstood the test of time.
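One concrete instance of the kind of implementation mistake being warned about here (the digest-comparison pitfall also comes up in the comments below): checking a received digest with ordinary equality can leak timing information. A hedged sketch of the safer pattern, with illustrative function names:

```python
import hmac

def insecure_check(stored: bytes, received: bytes) -> bool:
    # Ordinary comparison may return as soon as a byte differs,
    # leaking how long a matching prefix was (a timing side channel).
    return stored == received

def constant_time_check(stored: bytes, received: bytes) -> bool:
    # hmac.compare_digest runs in time independent of where the bytes differ.
    return hmac.compare_digest(stored, received)
```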

Stephen Touset
  • 3
    Yes I'm very wary of doing things outside of established protocols but if a client side hash of the password (versus sending it in literal form, albeit still encrypted and nonced) is a no-no I'd like help to understand _why_ it's bad. I've edited the question to put more emphasis on the goal: peace of mind for the user in not having their literal password received by the server. – Foy Stip Oct 23 '12 at 23:31
  • In the case of a changed email, the user would have to supply their password at that time so that it could be re-hashed but otherwise that should be fine. The salt will be different for different emails, and different from that used in other applications because of the fixed extra salt thrown in (specific to this app) for each hash. Also, BCrypt is definitely a possibility that I will consider. Thanks for your thoughts. – Foy Stip Oct 23 '12 at 23:32
  • The very first reason it's a no-no is that you *will* get it wrong. I don't mean any offense by this, but there are simply too many places to make a critical mistake for you to have any reasonable expectation of success. Whatever security might be gained from not sending the password over the wire will be obliterated the second you compare the digest to the stored value using the database, or your language's built-in string comparison function. Or when you make a mistake on one of a dozen other operations. – Stephen Touset Oct 24 '12 at 00:08
  • 1
    Another reason is that having read access to the database is now enough for an attacker to impersonate any user. Security is about defense in depth — layering security so that an attacker exploiting a failure in one level of security is still stymied by deeper layers. Converting a large security failure into an absolutely critical one is the exact opposite of that goal. – Stephen Touset Oct 24 '12 at 00:21
  • No offense taken, I appreciate the warning and your time to consider the prob. I'm wary of any scheme which seems out of the ordinary, hence my first concern in the OP. From what you're saying though should I be feeling unsafe about sending anything at all across an encrypted, replay protected stream? ie. the protocol that sits beneath this client authentication? To help me understand, what are some specific pitfalls for this case? You're right about the read access to the DB effectively providing free sign in. Would a (lightweight) re-hash on the server mitigate this? – Foy Stip Oct 24 '12 at 00:51
  • Yes, but you're playing whack-a-mole with security vulnerabilities. SSL is perfectly fine to send sensitive information over. PBKDF2 is perfectly fine for obfuscating passwords (but BCrypt is probably better). But it's the implementation of *everything else* that will be weak. You are thinking at too low a level about the problem for your level of proficiency. You should not be designing authentication protocols yourself, but instead should be choosing from preexisting approaches that have been designed by professionals and which have withstood the test of time. – Stephen Touset Oct 24 '12 at 01:20
  • As an example, how do you plan to validate authentication credentials received over the wire? `SELECT * FROM users WHERE email = ? AND password_digest = ?`?. Load the user by email first and do `user.password_digest == params['password_digest']`? You've just allowed an attacker to authenticate as any user. If you don't immediately see why, that should be a strong warning sign that you're walking blindly through a minefield. – Stephen Touset Oct 24 '12 at 01:25
  • 1
    Are you talking about SQL or other types of code injection? – Foy Stip Oct 24 '12 at 02:07
  • No, but those approaches are both vulnerable to timing attacks. After some thought, I think you're looking at this from the wrong perspective. You shouldn't be asking the question, "in what ways are the scheme I've invented insecure?" It should be *assumed* insecure, and the goal should be to demonstrate otherwise. – Stephen Touset Oct 24 '12 at 07:03
  • 1
    I am trying to adopt the mindset of an attacker but maybe I have been too focused on possibilities such as DDOS while missing other more obvious holes.. but that's also why I'm asking for feedback from experts here. I'd much rather be shot down here than on a live server. – Foy Stip Oct 24 '12 at 20:35
  • 1
    Are timing attacks feasible over an internet connection, even on a server with a low or steady load? I'm not sure that even requests with identical payload using a conventional versus slow equals would make a blip on the radar when you take into account the variable code paths (secure random and hash table buckets) and thread alertness, all executing on a colossus like Java where there's a lot happening behind the scenes. It's an interesting possibility though. – Foy Stip Oct 24 '12 at 20:38
  • 1
    [They are](http://crypto.stanford.edu/~dabo/papers/ssl-timing.pdf), according to Stanford researchers. The correct solution is to use a constant-time comparison algorithm. Lockouts after failed attempts also help mitigate the issue, but it's best to simply fix the root problem — policy about lockouts might change in the future. The point wasn't about the one attack, though, but that this stuff is *hard*, and there are an uncountable number of ways to mess it up. Work with high-level constructs (e.g., "authenticate user") instead of low-level ones ("AES" or "PBKDF2") whenever possible. – Stephen Touset Oct 24 '12 at 21:38
  • 1
    Yes it's easy to dabble in, difficult to get right. Even with your insights though I'm honestly still inclined to proceed with the client side hashing for this particular project as it is more of a learning exercise than something mission critical. That Stanford study was interesting, at least the non-mathematical parts that I could understand. It makes me wonder whether attackers must try to get hosting on the same colocation as their target as a means to get a prime network position for the timing. Anyway thanks again for all of your time and insights, greatly appreciated. – Foy Stip Oct 25 '12 at 00:25
4

A Google employee is working on something similar called TLS-OBC. This RFC draft allows the client to hash the password and bind it to a TLS session.

Specifically you may be interested in this website http://www.browserauth.net/origin-bound-certificates

and this link on Strong User Authentication http://www.browserauth.net/strong-user-authentication

Update:

OBC, and possibly the other one, is now integrated into the FIDO authentication standard.

makerofthings7
3

Password hashing helps prevent anyone except the user from learning the password. People reuse passwords and they are usually not very strong if memorised. That's why we bother with slow hashes (Bcrypt, Scrypt, Argon2, etc.) instead of a fast hash: it protects the user's password better, even though it does not have any benefit for the application. A secondary benefit of having the server hash an incoming password before storing it in the database, is that someone who compromises the database cannot login with any of the values they found (they first have to crack them).

We want to keep both benefits. To that end, we should hash on the client (slow) and the server (fast).

Why on the server?
Such that an attacker who obtained the database cannot use the obtained data to log in. If your client does a slow hash already, this can be a single round of some secure hashing algorithm (like SHA-3 or BLAKE2).

Why on the client?

  1. Transparency
    Whenever a company gets hacked, we have to hope they tell us how the passwords were hashed, if at all. There are still places that store them plaintext, or use a fast algorithm, or don't salt them, etc. By doing hashing client-side, those interested can see how it is done and prevent questions like these. My mom won't be checking it, but my mom also does not check a website's TLS for heartbleed: 'ethical hackers' or white-hat hackers do.

  2. Offload the server
    A risk of doing very slow hashes on the server is that an attacker can use it for a denial of service attack: if the server needs 2 seconds to process each login attempt, attackers have a huge advantage in trying to take it down. By doing the slow hash on the client (whose CPU is unused 99% of the time anyway, and it's not as if most people login to something more than a few times per day), you offload the server, allowing you to choose slower hashes without risking an attacker taking down your server.

  3. Reduce impact of compromised transmission
    If the secure transmission channel fails for some reason, for example a bug in TLS (June 2020: "Whoops, for the past 10 releases most TLS 1.0–1.2 connections could be passively decrypted"), the impact is lessened because an attacker would only be able to observe the hashes instead of the original password. This also applies when someone is able to decrypt an encrypted stream years later, for example because RC4, considered acceptable at the time, has since been broken.
    Note that in the bug from June 2020, authentication does not seem to be affected, so attackers couldn't have modified the JavaScript files to remove the hashing. Now, I have to wonder which passwords I used since the past months over https and if any of those connections might have been passively intercepted.

  4. Improve the standard
    Now that we can see everyone's implementation, there will be debate on who got the longest and who got the best. This discussion definitely pressures the bad ones to do better, and it probably also results in standardisation. It would be really neat if you could just add a parameter to the password input element, like <input type=password hash=v1>, which would do all the hashing for you (it would salt with the domain name and username field, but all those details of this proposal are beyond the scope of this answer).

  5. Detection of compromise
    A common argument is that "if the server is compromised, attackers could remove the JavaScript code responsible for hashing the password, allowing the attackers to obtain the plaintext anyway". This argument is only valid for websites, but people usually fail to mention that, resulting in that nobody hashes client-side in applications, either. What they additionally forget is that, if client-side hashing is common, security researchers can start using it: there will be people that scan important websites to see if the hashing is still there (it would be quite the indicator of compromise if PayPal removes client-side hashing (in the hypothetical case where they would do client-side hashing in the first place)), and there could be browser extensions (or built-in features) that warn you if it's removed, just like we warn for login forms on http pages.

  6. No secret snooping
    Even if they hash the database, employees could intercept the password before it is hashed and stored (by logging at a TLS termination point or by installing some extra code). There are enough stories of sour employees, or even teenagers building some application who want to know what their users' passwords are.

  7. No plaintext emails
    If the server does not have your password, they cannot send you email like "Thank you for registering, your username and password are xxx". Email is commonly considered insecure, but there are still websites that do this.

  8. No accidental exposure
    There was an incident somewhat recently where passwords were accidentally written to log files. I will not mention the company name because I don't think it's relevant, but it was some big tech company with a dedicated security team and everything. Accidents like these just happen and it had quite a big media fallout. There is additionally exposure in many networks where a separate system does TLS termination and then sends the request plaintext to other systems, allowing any network box to see the passwords (Google used to do this until they got wind of the NSA having intercepts in their internal network). "Plaintext" password intercepts would be less of an issue if we did the hashing before the password hits the network.

  9. Why not?
    I cannot think of any reason why you should not do it, assuming you additionally do a quick hash on the server (to get that secondary benefit mentioned in the first paragraph). Even if you think only one of the reasons is compelling, then it would still be an improvement.


I think the only reason client-side hashing is not common is because it's uncommon. Whenever it's proposed, people wonder why it's so rare if there are only benefits. The decision to do only server-side hashing often seems to be rationalized after the fact. These are some of the arguments I have heard before:

  • "An attacker would just remove the code responsible for hashing!" This only applies to web applications. Normal applications (among which mobile applications) do not have this issue.
  • "But my case applies to a web application!" If client-side hashing were common for web applications, we would probably see standardisation (think <input type=password hashversion=1>) similar to how languages and frameworks have standard hashing functions for developers to use. If a field is sent plaintext, the browser could indicate that. We can easily standardise this, but only if developers actually express an intent to do this and we choose to care.
  • "But whatever the result [of the client-side hash] is IS the password!" If your login database is compromised, it's unlikely that an attacker still needs to login to your application to grab all the data from it. The defense-in-depth of most systems really is not that good, but even if it is, the argument is void because the advice is to do an additional quick hash on the server.
  • "You should just use a password manager instead of relying on the security of hashes!" I agree, that would be ideal. Two factor authentication everywhere, used by everyone, with password managers and smartcards... yeah, when that day comes, we do not need password hashing anymore at all. You should still do a fast hash on the server for the aforementioned secondary benefit, but we can stop doing slow hashing algorithms and client-side hashing because nobody would re-use passwords anyway.
  • "You should not put the user in charge of security!" They are already in charge of security: you can always choose to have 123456 as a password (or if there are silly requirements, you can still do "Bailey2009!"). You can also publish the keys to your TLS connection and nullify its security. The user will always be able to mess up any security you implement if they want to.
  • "But how will you salt the hash?" The username field, perhaps combined with a domain or company name to make it globally unique. A salt is not secret, only unique, just like your username field.
Luc
  • +1, a couple points though: 1) smartphones limit the amount of processing power you can put towards client-side hashing if you want your web/mobile app to be usable. 2) Salting with username is a lot better than nothing, but it's not ideal. Usernames aren't globally unique, and they're fairly predictable. Probably not a big deal, but it doesn't feel that great. – AndrolGenhald Jan 10 '19 at 02:01
  • Also, the [web crypto api](https://www.w3.org/TR/WebCryptoAPI/#pbkdf2) supports PBKDF2, allowing for use of a native (and not horrendously slow) implementation, but there's no support (yet) for Argon2 or Bcrypt. On the other hand, this [bcrypt js demo](https://fpirsch.github.io/twin-bcrypt/) using asm.js runs on my phone in only a few seconds, and while that's a couple orders of magnitude more than a native implementation on my desktop, it may still be acceptable. Perhaps hardware and js improvements have made it no longer a concern. – AndrolGenhald Jan 10 '19 at 02:08
  • Recently there was a case where a company banned an account because the user used a password that the company didn't like, which led me to this problem of websites receiving the password I type in plaintext. A lot of people argue that client hashing is not necessary, but most of the time the argument is just "don't be silly! No one does it!", or it imposes ridiculous constraints, such as assuming servers will suddenly stop using proper password storage because they "offloaded" it to the client? Lol – MxLDevs Jun 05 '20 at 22:41
  • @AndrolGenhald It's a smartphone: if you can play League of Legends, you can hash a string. I assume devices in 2019 were decent. It seems like the argument is "it's not THAT much better, might as well not bother"? – MxLDevs Jun 05 '20 at 22:43

As one of the users, "Luc", stated, client-side hashing is not common.

Here are the reasons why client-side hashing is far better than sending clear text, even though it remains uncommon:

People who are saying that hashing password becomes the password have no idea what they are talking about. That is not the point of hashing a password, which they completely fail to realize.

From a security point of view, hashing a password is meant to make it extremely difficult for a hacker to recover the real password in clear text, so that it cannot be used at other websites.

Hashing passwords on the client side is not meant to protect the server or database; it is solely to protect the user's privacy and credentials.

Hashing passwords on the client side also adds many layers of advanced protection for user credentials, due to advance algorithms implemented by smart tech companies who know how to implement algorithms which can detect a hashed password from a hacker and from a real user.

One implementation is to have the server distinguish a hashed password that was derived from clear text the user actually typed. The server can automatically detect if the hashed password was given by a hacker who did not type the clear-text form of the hashed password. This is not so hard to implement. The benefit of this is that the hacker will now have a difficult time figuring out the clear-text form of the hashed password or reverse engineering the server's detection algorithm.

Another implementation is to have the server change its algorithm for detecting a hashed password given by a user, in an adaptive manner, on a weekly basis. The server's algorithm which talks to the user for a hashed password has now become an organic organism which will be unique and difficult to understand and reverse engineer by a hacker, since the algorithm changes weekly. It's like an organism (algorithm) fighting and preventing viruses (hackers). The hacker will have a difficult time sending intercepted hashed passwords to the server, since they never typed the clear-text password to begin with, and the server will keep rejecting the hashed password sent by the hacker.

I am not going to get into details of how to exactly to implement it but real professional software engineers will get the idea and can easily implement it. So this is my answer to those people who just say that a hashed password is the same as clear text: NO, it is not the same in this advanced modern world. They are two different things, and advanced algorithms can detect many things about a hashed password. Giving hackers plenty of opportunities to see clear-text passwords is just foolish and dumb. Never rely on TLS "secured" connection, these secured connections can always be easily intercepted by hackers without people being aware of it.

And yes, hashed passwords can be run through data-crunching machines to recover the clear-text form, but it is going to cost them a lot of time and money for hashed passwords that were originally hashed by slow hashing algorithms such as Argon2, and that were also peppered and salted by the client.

This is why it's just smarter and better to have the client hash their own passwords and send the result over to the server.
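As a rough illustration of the cost argument above (the attacker speed and the iteration count are assumed figures, not measurements): if an attacker can try around 10^10 fast hashes per second on GPU hardware, a client-side KDF with 100,000 iterations cuts that to roughly 10^5 password guesses per second:

```java
public class BruteForceCost {
    /**
     * Back-of-the-envelope estimate (in years) to exhaust a password space
     * when every guess requires `iterations` underlying hash computations.
     */
    static double yearsToExhaust(double fastHashesPerSec, long iterations,
                                 int alphabetSize, int passwordLength) {
        double guessesPerSec = fastHashesPerSec / iterations;
        double candidates = Math.pow(alphabetSize, passwordLength);
        return candidates / guessesPerSec / (365.25 * 24 * 3600);
    }

    public static void main(String[] args) {
        // Assumed GPU speed against one raw fast hash (illustrative only).
        System.out.printf("8-char alphanumeric, 100k iterations: ~%.0f years%n",
                yearsToExhaust(1e10, 100_000, 62, 8));
    }
}
```

The same space with no client-side slowdown (one fast hash per guess) falls in well under a day, which is the asymmetry the slow, salted client hash is buying.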

S To
  • The purpose of hashing is not to reduce the impact of password reuse. That's a separate issue. You are confusing and combining several issues into one big thing and coming to the wrong conclusions. – schroeder Apr 18 '21 at 15:44
  • "I am not going to get into details of how to exactly to implement it but real professional software engineers will get the idea and can easily implement it. " -- you are treating the entire issue like this one possibility you've imagined is actually true and possible, That makes this entire answer a work of fiction. [Confirmed](https://security.stackexchange.com/questions/248485/security-technologies-that-implements-security-through-organic-like-obscurity) You don't even know that it is possible... – schroeder Apr 18 '21 at 15:52
  • *due to advance algorithms implemented by smart tech companies* - Do you mean the algorithms can be implemented on client, but not on server? Seriously? Have you ever programmed anything? Your statement has nothing to do with reality :-) – mentallurg Apr 18 '21 at 15:56
  • *algorithms which can detect a hashed password from a hacker and from a real user* - This is bullshit, sorry. – mentallurg Apr 18 '21 at 15:58
  • *hashed password has now become into an organic organism which will be unique and difficult to understand* - I'd suggest you read a book or at least a couple of articles about the basics of security. You will learn that strong solutions are clear and very easy for everyone to understand. Making a solution hard to understand increases the risk of mistakes at different phases (design, implementation, testing, applying it). Besides, Kerckhoffs's principle says you should expect that the attacker knows all algorithms used. The only secret should be the key, not the algorithm. – mentallurg Apr 18 '21 at 16:04
  • *Never rely on TLS "secured" connection, these secured connections can always be easily intercepted by hackers without people being aware of it.* - This is impossible. I'd suggest that you explain this statement of yours or delete it. – mentallurg Apr 18 '21 at 16:06
  • *People who are saying that hashing password becomes the password have no idea what they are talking about.* - It seems that *you* have not understood that. What was meant is the following: if authentication means matching the hash that the user sent against the hash on the server, then an attacker who has stolen the hash database will be able to use it to log in. Thus, knowing the hash is sufficient to log in. Effectively, this makes hashes equivalent to passwords in the classical login scheme. – mentallurg Apr 18 '21 at 16:16