
Suppose I sign up for website.com with username "John" and password "Secret".

Currently the web browser supplies website.com with my real plain-text password, and we must trust the site to salt and hash it properly so that, if it is hacked, damage to users is minimized.

Why don't web browsers hash and salt your password for you? What would the downsides be if, instead, the browser communicated:

username: John
password: Sha256("website.com|john|Secret") => 
"655cd29ded358433da16867b682c21621664d26b9ca493ab224488dffce17050"

Maybe it's not the best scheme in the world, but is it worse than nothing at all?

With this scheme, websites would have to keep track of which domain you signed up under, and you would probably want the hash function to lowercase the username so that the web browser communicates the same password regardless of how you capitalize your username.

The reason I suggest including the domain or some other company ID in the hash is so that rainbow tables can't be used against more than one site at a time.
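
For reference, here is a minimal sketch of what that derivation might look like if a browser implemented it, written in TypeScript against the standard Web Crypto API. The function name and the exact `domain|username|password` input format are assumptions taken from the example above, not an existing browser feature.

```typescript
// Hypothetical browser-internal derivation; "deriveSiteHash" is an
// illustrative name, not a real browser API.
async function deriveSiteHash(
  domain: string,
  username: string,
  password: string,
): Promise<string> {
  // Lowercase the username so "John" and "john" yield the same hash.
  const input = `${domain}|${username.toLowerCase()}|${password}`;
  const digest = await crypto.subtle.digest(
    "SHA-256",
    new TextEncoder().encode(input),
  );
  // Hex-encode the 32-byte digest; this string is what would be submitted
  // in place of the plain-text password.
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

// Usage: await deriveSiteHash("website.com", "John", "Secret")
```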

--

Update: my question refers to a web-browser implementation of client-side hashing, not one-off implementations in JavaScript. Relying on a one-off implementation of client-side hashing is similar to relying on server-side hashing -- that is, it is hard to be sure that a given website/company/implementation is not leaking your plain-text password. This is about shifting the burden to the major web browser vendors.

William
  • Possible duplicate of [https security - should password be hashed server-side or client-side?](https://security.stackexchange.com/questions/8596/https-security-should-password-be-hashed-server-side-or-client-side). I realize your question is slightly different, but it seems to address the core topic. – PwdRsch Jul 17 '19 at 18:02
    *"Relying on a one-off implementation of client side hashing is similar to relying on server side hashing -- that is -- it is hard to be sure if a given website/company/implementation is not leaking your plain text password. This is about shifting the burden to the major web browser vendors."* - I think you are missing the main point regarding client side hashing: it is not about leaking the password due to some bad script but it is about having essentially the hash being the password - which still needs to be properly protected server side Insofar I consider this still a duplicate. – Steffen Ullrich Jul 17 '19 at 19:15
  • @SteffenUllrich I think this is different. Here the hash would be different for each website: _Sha256("website.com|john|Secret")_ – yeah_well Jul 17 '19 at 19:36
  • 1
  • @VipulNair: Knowledge of the hash is still enough for the attacker. Note that the attacker does not need to use a browser with this feature to access the site - he could simply send the hash directly inside an HTTP request. *"Here the hash would be different for each website"* - this could be the case with a script served by the site for client-side hashing too. The general discussion of client-side vs server-side hashing (or use of both) is still the same, so I still see this as a duplicate. – Steffen Ullrich Jul 17 '19 at 20:58

3 Answers


What are the downsides? None, really, aside from significantly increased complexity[1], which is rarely a good thing, as well as high migration costs for literally every website with a password field, which is also not a good thing.

But let's consider the implications anyway, to see if they outweigh the costs. There are two ways you could handle this hash:

  1. Don't hash it on the server side. Now a database leak immediately gives attackers access to everyone's accounts, because they just pass the hash in like they'd have passed the leaked password in before.
  2. Hash it (again) on the server side. This probably won't hurt security, unless your hash functions happen to interact in extremely weird and unlikely ways. But now everything is basically the same as before; the only difference is that instead of the site directly knowing your password, it only knows the browser's hash of your password (see the sketch below).

In neither case do you gain any advantage. #1 is obviously wrong once you consider malicious clients, and #2 only protects you from attackers who sniff your password and reuse it elsewhere -- which shouldn't even be possible, because you shouldn't be reusing passwords at all. There are so many other attack vectors -- keyloggers, phishing sites, and so on -- that avoiding password reuse is a good precaution no matter how your password is handled by the websites you use.
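
To make option 2 concrete, here is a minimal sketch assuming a Node.js backend; the function names and scrypt parameters are illustrative, not part of any real proposal. The server simply treats the browser-supplied hash the way it would treat a plain-text password today.

```typescript
import { randomBytes, scryptSync, timingSafeEqual } from "node:crypto";

// Option 2: treat the browser-supplied hash exactly like a plain-text
// password and store only a salted, stretched hash of it.
function storeCredential(browserHash: string): { salt: string; hash: string } {
  const salt = randomBytes(16);
  const hash = scryptSync(browserHash, salt, 32); // slow, salted KDF
  return { salt: salt.toString("hex"), hash: hash.toString("hex") };
}

function verifyCredential(
  browserHash: string,
  stored: { salt: string; hash: string },
): boolean {
  const candidate = scryptSync(browserHash, Buffer.from(stored.salt, "hex"), 32);
  return timingSafeEqual(candidate, Buffer.from(stored.hash, "hex"));
}
```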


[1]: Some questions any implementation will need to answer include:

  • Where to store salts (or, if they're not stored -- as seems necessary -- how to generate them in a way that can't be trivially parallelized to defeat the entire purpose of salts)
  • What hash algorithm (and parameters) to use (see the sketch after this list)
  • How to handle it when hash algorithms weaken over time and need to be replaced
  • Where to actually implement the in-browser hashing step, which seems simple until you actually sit down and think about it
  • How to securely handle this cross-platform, so that the same user entering the same password sends the same hash to the website
  • How to still provide unhashed password-style fields, since they're often useful for entering other sensitive data (SSNs, credit card numbers) that the page genuinely needs in plaintext, without giving bad websites an easy opt-out that lets them keep their original bad handling
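
To make a few of those questions concrete, here is one hypothetical set of answers: a slow, parameterised KDF (PBKDF2 via the Web Crypto API) instead of a single SHA-256, with the salt derived from the domain and lowercased username. The iteration count and salt construction are illustrative assumptions, and a salt computable from public data is exactly the limitation the first bullet describes.

```typescript
// Hypothetical derivation answering the "which algorithm, which parameters,
// which salt" questions above; none of this is a real browser feature.
async function deriveSubmittedHash(
  domain: string,
  username: string,
  password: string,
): Promise<ArrayBuffer> {
  const enc = new TextEncoder();
  const keyMaterial = await crypto.subtle.importKey(
    "raw",
    enc.encode(password),
    "PBKDF2",
    false,
    ["deriveBits"],
  );
  // The iteration count and hash would have to be standardised across
  // browsers and revised as hardware improves.
  return crypto.subtle.deriveBits(
    {
      name: "PBKDF2",
      salt: enc.encode(`${domain}|${username.toLowerCase()}`), // derivable from public data
      iterations: 600_000, // illustrative figure
      hash: "SHA-256",
    },
    keyMaterial,
    256, // output length in bits
  );
}
```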

All of this is incredibly complicated, and all that complication adds plenty of room for bugs. In security code, that's especially dangerous.

Nic

Because of legacy. It would be entirely possible to implement https://en.wikipedia.org/wiki/Secure_Remote_Password_protocol and build it into browsers, with some modifications to adapt it to the Web. But the industry prefers centralised solutions such as WebAuthn with TEEs, because that way it is more harmful to users' privacy and security and more beneficial for businesses selling devices.
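
For context, the part of SRP most relevant here is that at signup the client derives a "verifier" from the password, and the server stores only that verifier -- never the password or anything directly reusable as one. Below is a heavily simplified sketch of the verifier computation (toy modulus, SHA-256 standing in for the RFC's hash choices); real SRP as specified in RFC 5054 uses large standardized group parameters.

```typescript
import { createHash, randomBytes } from "node:crypto";

// Toy SRP group parameters -- purely illustrative and NOT secure.
// A real deployment uses a 2048-bit+ safe prime such as those in RFC 5054.
const N = 2305843009213693951n;
const g = 2n;

// Square-and-multiply modular exponentiation for bigints.
function modPow(base: bigint, exp: bigint, mod: bigint): bigint {
  let result = 1n;
  base %= mod;
  while (exp > 0n) {
    if ((exp & 1n) === 1n) result = (result * base) % mod;
    base = (base * base) % mod;
    exp >>= 1n;
  }
  return result;
}

function hashToBigInt(...parts: (string | Buffer)[]): bigint {
  const h = createHash("sha256");
  for (const p of parts) h.update(p);
  return BigInt("0x" + h.digest("hex"));
}

// Client side, at registration: the server receives (username, salt, verifier)
// and never sees the password or a password-equivalent hash.
function makeVerifier(username: string, password: string) {
  const salt = randomBytes(16);
  const inner = createHash("sha256").update(`${username}:${password}`).digest();
  const x = hashToBigInt(salt, inner);
  const verifier = modPow(g, x, N);
  return { salt: salt.toString("hex"), verifier: verifier.toString(16) };
}
```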

KOLANICH
  • Or [don't use a hacky, kinda-broken protocol and just use OPAQUE instead](https://blog.cryptographyengineering.com/should-you-use-srp/)? And I _seriously_ doubt browser-makers' reluctance is some grand conspiracy to sell devices, when there are much simpler explanations like SRP being fundamentally bad, there being no clean way to do this generically, and the incredibly low ROI when they could spend their time on actually useful things. Then again, I dunno, maybe Mozilla secretly does want you to buy their phone-- oh, hrm. – Nic Jul 17 '19 at 23:14
  • Admittedly my crypto knowledge is limited, but last time I looked at zero-knowledge it was a hard problem - particularly for the web. But I think I mostly disagree with centralized solutions being harmful (unless I'm misunderstanding, so please correct me). Take an enterprise. 50+ properties. Those properties don't need user passwords. The centralized solution can do OAuth or SAML, and only one property, the IdP, needs to know the user's password, reducing the landscape in which it's available for compromise. – h4ckNinja Jul 18 '19 at 02:03

This is a reasonable idea, but there are some problems with coordinating massive centralized efforts. KOLANICH already suggested SRP, a very nifty and even more secure protocol for remote passwords.

The main problem is that not enough people actually care about security to force the whole IT industry to change what's already working. Meanwhile, privacy freaks can just use password managers with long random passwords to achieve much the same result.

You can ask many similar questions -- for example, why are DNS, ARP and SMTP unencrypted and unauthenticated? Because they work and nobody cares.

Andrew Morozko