
Who is responsible for a user's password's strength? Is it us (developers, architects, etc.) or the user?

As a web developer, I've frequently wondered whether I should enforce a minimum password strength on the users of my websites and applications.

I frequently meet regular people (not geeks or IT professionals) who hate being required to mix lower- and upper-case letters and add numbers, not to mention other characters, in their passwords.

I'd prefer to measure password strength and only inform the user about it rather than mandate it. This way, those who don't like adding numbers or mixing cases would be warned, yet still allowed to use weak passwords.

I'd still impose a minimum length requirement, but should I enforce any further requirements?
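
To illustrate what I mean (a rough sketch only; the names and the scoring rules are mine, not a recommendation), this is the kind of check I have in mind, in Python:

```python
import re

MIN_LENGTH = 8  # the only hard requirement I would enforce

def check_password(password: str) -> tuple[bool, int]:
    """Return (accepted, advisory_score 0-4). Only length blocks the user;
    the score is merely displayed so they know how weak their choice is."""
    if len(password) < MIN_LENGTH:
        return False, 0

    score = 0
    if len(password) >= 12:
        score += 1
    if re.search(r"[a-z]", password) and re.search(r"[A-Z]", password):
        score += 1
    if re.search(r"\d", password):
        score += 1
    if re.search(r"[^A-Za-z0-9]", password):
        score += 1
    return True, score  # weak passwords pass, they are just flagged
```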

If a user with a very simple one-word password has his account hacked in a dictionary attack, is it his fault, or is it ours for allowing such a simple password?

Michal M
  • If I have to write down my password in order to remember it due to your password rules, that might be a good reason to use simpler rules such as a simple length requirement. – Joe Phillips Aug 22 '11 at 19:33
  • A password only needs strength appropriate for the information it is protecting. Though password strength is important in systems protecting valuable information, in systems protecting low-value (to a potential attacker) information, the strength is not as important. – this.josh Aug 23 '11 at 01:53
  • I can't believe someone didn't add this already: http://xkcd.com/936/ – boatcoder Aug 23 '11 at 11:57
  • @Mark0978 - That's covered in another topic: http://security.stackexchange.com/questions/6095/xkcd-936-short-complex-password-or-long-dictionary-passphrase – Iszi Aug 23 '11 at 12:56
  • @Mark0978 - as Iszi already mentioned, there's a separate thread on it and that exact thread triggered my topic. – Michal M Aug 23 '11 at 13:17
  • Password length has a far greater impact on security than the old-think "password strength" rules like lower and upper case, numbers and punctuation. These days, I think I would tend to only require length, and make the length at least 12 to 15 characters. It's easy enough to remember a long pass *phrase* without a lot of upper/lower/digits/punctuation mumbo-jumbo. Of course the users will still gripe about the length requirement, but you can't please everybody. :) – Craig Tullis Apr 02 '14 at 00:17

8 Answers


You cannot answer this question without answering the following question:

If a user's password is compromised, does that only put that user/that user's data at risk, or does it also put other users at risk?

If the system does not compartmentalize accounts, then a user cannot keep their data safe by choosing appropriate credentials, so administrators must be responsible, in part, for the overall security of the community.

If the system does compartmentalize accounts, then a user can only harm themselves by choosing weak credentials, so shifting responsibility to the user can be appropriate.

For example, email mailboxes are well compartmentalized; compromising one account does not compromise others much. Some accounts might be set up not to treat mail from other accounts as spam, but there are many other ways to impersonate email senders, so the ability to send email as a user is rarely a huge escalation of authority. An administrator can abdicate responsibility for password strength without compromising the security of the system.

But file systems and source code repositories are not well compartmentalized. Compromising one account whose ~/bin directory is on other users' PATHs, or which has commit access to a vital source code repository, can lead to the compromise of many other accounts and systems via trojans. In this case, compromising one account escalates authority system-wide; administrators cannot abdicate responsibility for password strength because they cannot abdicate responsibility for system health.

Mike Samuel
  • Interesting points to think about. It seems to conform to the rule that your security should be proportional to the value of the assets protected by that security. – this.josh Aug 23 '11 at 01:31
  • @this.josh, Yep. The value of a credential is proportional to the total authority it provides, both that invested in the credential holder, and in systems that delegate authority to the credential holder. In the PATH case, the problem arises because a shell-user delegates (via `execvp`) authority to files in a compromised directory, and in the code repo case, the exploit occurs because critical systems delegate (via code loaders) authority to compromised source files. – Mike Samuel Aug 23 '11 at 01:45
  • I think what you are describing is the lack of intuitive fine-resolution mechanisms to protect information that is shared unequally amongst people. Current popular mechanisms have coarse resolution and boundaries convenient to computer systems instead of people. Those boundaries are file, directory, hardware component, process, user account, computer system, subnet, subdomain, domain, etc. – this.josh Aug 23 '11 at 08:17
  • @this.josh, There are fine-grained mechanisms available in the security literature. For example, [ocaps](http://en.wikipedia.org/wiki/Object-capability_model) provides trust boundaries at the object level. – Mike Samuel Aug 23 '11 at 16:24

If you are a developer, you should not be the one setting requirements - this should be set in policy by the organisation you are developing for. However, if you are security-aware, you should be able to discuss what you think a minimum baseline should be. In fact, you may be able to use the security of your application as an additional selling point over the competition in the current climate of fear over being hacked.

But it should absolutely not be your responsibility - the organisational policy should define password requirements as per their risk appetite, and individuals can always use a stronger password than the minimum required.

Rory Alsop
  • "risk appetite" is the key phrase. So if you're writing shrink-wrap software, you have almost no right to set the policy. – Aug 22 '11 at 18:37
  • +1. Great explanation from an organizational responsibility point of view. My post complements this by arguing about what such a policy *should* take into account. – Mike Samuel Aug 22 '11 at 20:51
  • Most organizations do not have sufficient security resources, especially if they do not have legally mandated requirements. Smaller organizations often don't realize that they have legally mandated security requirements. If, after making appropriate requests, you find no security resources, I highly encourage developers to design and document their password schemes as formally as possible. – this.josh Aug 23 '11 at 01:29
  • It really depends upon the organization. Smaller organizations may not have any idea of what is required, and it is the duty of the developer/architect to explain the security implications to the business process guys. – Andrew Neely Aug 23 '11 at 11:31
  • +1 for the organizational responsibility viewpoint. However, my reading of the intent of the OP was not really about `who decides`, rather `who enforces`. I.e. if there is a corporate policy, is it the responsibility of the developers (programmers+architects+etc) to *enforce* that policy, right in the application; or is it the responsibility of the users to *comply* with it. – AviD Sep 14 '11 at 10:36
  • good point - it is the responsibility of the corporation to define the enforcement requirements needed in the app, and to engage the developers in implementing this. – Rory Alsop Sep 14 '11 at 11:18

Insofar as you wish strong passwords to protect your server/service, it is your "fault" if your policies are too lenient. That assumes that your server/service is threatened by an individual end user compromise, which is not always the case.

Insofar as your users wish for their own account to be protected, it is their "fault" if they do not choose a sufficiently strong password, regardless of the policies in place.

If enough accounts are compromised, then "fault" is determined by who spins their view to the media faster and more effectively.

It's really an area where you can lead a horse to water, but you can't make it drink. Draconian password policies usually cause security to leak in some other direction, such as passwords being written down or guessable non-alpha sequences such as birth dates. Missing password policies lead to weakest-link problems, as someone will always pick 'secret' as their password given the choice.

"fault" is a pretty nebulous concept here. You could ask about legal liability, which is firmer. I believe there's been a case or two involving bank passwords that were compromised the the bank was blamed for not requiring sufficient protection, but I don't recall the results.

Actually, there's a fascinating example that I saw last week. A non-profit lost $70k when someone got passwords for their banking and leveraged them.

“We had declined some of the security measures offered to us, [but if] we had those in place this wouldn’t have happened to us,” French said. “We thought that would be administratively burdensome, and I was more worried about internal stuff, not somebody hacking into our systems.”

and then

MECA has since added more security features to its online banking account, and access to that account is only possible through a locked-down, dedicated computer.

“All of this is a day late and a dollar short, I guess,” French said. “Why isn’t someone out shouting on the rooftops about this fraud? People need to understand how exposed they are.”

That illustrates peoples' attitudes toward password security right there. "We declined to strengthen our security... why didn't someone tell us to strengthen our security?!?" How far are they from suing their bank for not requiring better security?

gowenfawr
  • Given that my bank has password in-security rules (max 12 chars, no special chars allowed), I would most certainly hold them responsible if my account were compromised by a password attack! – Affe Aug 22 '11 at 17:38
  • Yes, you make a good point - if imposed MAXIMUM limits cause weaker security, then the vendor clearly is at fault on some level. – gowenfawr Aug 22 '11 at 17:55
  • I think you are referring to the [Patco vs. Ocean Bank](http://www.med.uscourts.gov/opinions/rich/2011/jhr_05272011_2-09cv503_patco_v_ocean_bank.pdf) fraud lawsuit – this.josh Aug 23 '11 at 01:45
  • @this.josh - I don't think gowenfawr is referring to Patco v. Ocean Bank, although that's another interesting (but very different) case example to look at. In Patco v. Ocean Bank, the issue isn't about weak or strong passwords. It's about a very weak, so-called "multi-factor" authentication system. – Iszi Aug 23 '11 at 12:54

It seems to me that there are two separate issues that need to be covered in the question.

Users choosing weak passwords.

The responsibility for this piece can be conceivably split three ways:

  1. The Management and IT Security offices within the organization need to define appropriate policies for password strength, refresh, and reuse.
  2. The Developers need to properly implement technical restrictions which enforce these policies.
  3. The Users need to follow the spirit of these policies, not just the "letter of the law", in order to create "strong" passwords.

In the end, responsibility for the "strength" of the password absolutely comes down to the User. (Though the law may judge differently - I'm not a lawyer, so I can't speak for such.) Still, it would be irresponsible of us (IT Security, Management, Developers, etc.) to not establish and enforce appropriate password policies, and educate the Users on the necessity and proper implementation of these policies.

Password database compromise.

To fully answer the question:

"If a user who has a very simple one word password has his account hacked in a dictionary attack is it his fault or is it our fault that we allowed for such a simple password?"

We also need to address how the dictionary attack became possible in the first place. That is, who was responsible for protecting the password database? This falls into two groups, and - if the media is a good measure - probably reflects who is most likely to be blamed at least publicly, if not legally, for accounts being hijacked.

  1. The Management and IT Security offices within the organization need to define appropriate policies regarding the maintenance and security of the organization's servers.
  2. The Developers need to properly implement these policies both in technical application and in general practice.

If the password database is exposed to an attacker, passwords can be cracked regardless of strength. It's only a matter of how much time and computational power the attacker is willing to devote to the task. It is entirely the responsibility of the organization (IT Security, Management, and Developers) to perform due diligence and give their best effort in preventing this.
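
A minimal sketch of part of that due diligence, assuming Python on the server (the scrypt parameters below are illustrative, not a tuning recommendation): storing a per-user random salt and the output of a deliberately slow KDF, rather than the password itself, raises the time and computational power an attacker must spend on every guess against a leaked database.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Store (salt, digest), never the password itself; the slow KDF makes
    offline cracking of a leaked database far more expensive."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)
```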

Iszi
  • +1 from me for answering both bits of the question. – Rory Alsop Aug 22 '11 at 19:03
  • -1 I don't see how this makes anyone more secure. – rook Aug 23 '11 at 00:54
  • Good response to both questions and excellent description of the division of responsibility, commensurate with duties and functions in the security system. – this.josh Aug 23 '11 at 01:50
  • Just to nitpick a bit, a dictionary attack doesn't necessarily require a database compromise... but it would be an authentication mechanism failure, for allowing repeated login attempts... – AviD Sep 14 '11 at 10:46

If you're a web developer working alone on a site, you decide how annoying it will be when someone has his account compromised and blames your site for being insecure.

If you work in a company, unless you're in the big committee that decides about security, you shouldn't bother.

If you're in charge (even partially) of security, you have to argue that users alone are responsible for their passwords, the same way they are responsible for not losing their wallets or not putting their bank account numbers and PINs on a billboard. But... they will blame your site if their account is abused, and will only think of password security as your responsibility. Gawker's leak was a very good example of how poorly people choose their passwords.

The best idea I've read so far, in a different question, was this one (though I haven't seen any site using it; a rough sketch follows the list below):

  • put a password strength meter on the password field. Instead of using a "weak - medium - good" notation, give it more levels, like a 1-10 scale

  • indicate (and implement) a policy that the user will have to change a password of that strength every x days. If the strength is very low, indicate that he could keep the password longer if he strengthened it a bit more.

  • if he still chooses a low-quality password, make him tick an "I agree" checkbox.
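
A minimal sketch of that scheme (assuming some 1-10 scoring function already exists; all names here are illustrative):

```python
def password_lifetime_days(score: int) -> int:
    """Map a 1-10 strength score to how long the password may be kept:
    score 1 -> 30 days, score 10 -> 210 days."""
    return 30 + (score - 1) * 20

def accept_password(score: int, user_agreed: bool) -> bool:
    """Very weak choices are allowed only after an explicit 'I agree' tick."""
    if score <= 3 and not user_agreed:
        return False
    return True
```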

woliveirajr
  • I take offense at the statement about Gawker's security leak. Their inability to keep their servers secure was their fault; the fact that some of their users used weak passwords could have been avoided by forcing them to use a secure password. I should add that the reason Gawker Media was even compromised was their own weak passwords, not their users' weak passwords. – Ramhound Aug 22 '11 at 17:19
  • I just mentioned Gawker to show that (a) users have weak passwords - the leak was better than any study I have seen so far about which passwords users actually use - and (b) Gawker's admins are, in the end, users too. And without something preventing them from using bad passwords, they used - guess what? - bad, weak, simple passwords. – woliveirajr Aug 22 '11 at 17:28
  • You don't answer the question asked. – this.josh Aug 23 '11 at 01:47
  • @this.josh: there are 4 questions: (1) who's responsible, (2) should I enforce, (3) should I put more than the minimal requirements, and (4) whose fault is it if a dictionary attack succeeds. I answered 1 and told how to deal with 2, 3 and 4. And I didn't see your answer... – woliveirajr Aug 23 '11 at 11:55
  • Your answer appears to say that only the user is responsible, is that accurate? You describe your method but do not explain how it compares to existing methods. I don't think that my not having an answer to the question is relevant to your answer. – this.josh Aug 23 '11 at 18:01
  • @this.josh: nope, read it again. I say that whoever is interested in the security is the one responsible for it. If you don't want users saying that your site is weak, prevent them from using bad passwords on your site. Because they will (a) use weak passwords and (b) complain that your site is insecure. – woliveirajr Aug 23 '11 at 18:09
  • @this.josh: prompted by your comment, I stopped to think about where I implied that the user was responsible. It might have been the 3rd paragraph. And there's a big "but" there, warning that even if users are responsible (and yes, I think they are the only ones responsible for choosing passwords - otherwise, just give them a password and urge them to memorize it), you'll get the consequences of their bad choices. – woliveirajr Aug 23 '11 at 18:16
  • Interesting suggestion about the forced-password-change policy, but I wonder if it would really work to make things more secure or if it just creates more problems. People might choose especially weak passwords such as the week's number, or use a secure one and write it down, to name two things off the top of my head. – Luc Nov 01 '12 at 11:36

The developer, because whether it's your fault or not, they'll blame you when their account gets compromised.

Inverted Llama
  • With that kind of logic I could blame the state for driving into a tree. _Who put that tree there! Who put that road so close to the tree!_ Also, your answer is rather short and subjective. – Luc Nov 01 '12 at 11:28
  • The tree in that example isn't paying you. It's irrelevant whether a security issue is your fault or the client's if the end result is that the client no longer buys your product. – Inverted Llama Nov 02 '12 at 09:43
  • I'm not paying to login at Stack Exchange. But yeah, there is a difference between "someone's password got hacked" (which might be the user's fault) and "mass hacking accounts" (which might be the developer's fault). Others have already pointed that out though. – Luc Nov 02 '12 at 17:02
  • You are paying, just not with cash: you are giving SE something of value for the service. If you decide to give this to another company because your account got hacked, it doesn't really matter whether your password or SE's servers were at fault; either way SE is no longer getting this value from you, and SE is the one losing out. – Inverted Llama Nov 05 '12 at 10:28
  • Hm that's actually a good point. I think if you elaborate this in your answer others will upvote too. – Luc Nov 05 '12 at 12:13

Who is responsible is who will get the blame. It is a policy decision, so it can be "anybody". However, bad decisions can happen. Here is some food for thought:

  • Developers themselves must not be responsible for design decisions, because this would mean that either they make design decisions (in which case they are designers, a different job which they may or may not perform in parallel), or, worse, that they get blamed for the mistakes of others (not the best way to keep your developers around...).

  • It is not possible to measure the strength of a password. Password strength is a characteristic of the password generation process, not of the password itself. To a small extent, you might infer some data about that process by observing a lot of passwords chosen by a given user, but this is not practical. "Password meters" do not give very useful results; e.g. this site will rate "BillClinton1992" at "100%", the highest possible score (a toy meter sketched after this list makes the same point). Since password strength checks cannot be reliably automated, it does not make sense to make software designers responsible for password strength.

  • What application designers can be made responsible for is what they can do, i.e. offer ways to implement password policies. They should not dictate the policies themselves (that's the job of a security architect), but they can implement (or rather, have developers implement) some tools such as password length constraints or password generators.

  • Blaming users is fruitless. For proper password security, users must be enlisted; you will get nowhere without their cooperation. But users are not only users, they are also human beings; as such, they do not like at all to be bossed around, and tend to react adversely to constraints. If you try to enforce "strict password rules" with a tool, then they will creatively invent ways to cope with these rules, with minimal effort on their part, and not necessarily for the best when it comes to security. For instance, they will systematically append "1A." to their password (which is otherwise an all-lowercase common word) so that they pass the tests of "need digits, need symbols, need uppercase letters". Or they will write passwords down, which is not bad when they do it on a piece of paper they keep in their wallet, but quite horrific when the paper is a sticky note concealed under the keyboard.
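
To make the point about password meters concrete (a toy sketch of the common character-class recipe, not the meter linked above), such a scorer happily gives a guessable password full marks:

```python
import re

def naive_meter(password: str) -> int:
    """The character-class scoring many sites use, scaled to 0-100."""
    score = 0
    if len(password) >= 12:
        score += 40
    if re.search(r"[a-z]", password) and re.search(r"[A-Z]", password):
        score += 20
    if re.search(r"\d", password):
        score += 20
    if len(set(password)) >= 8:
        score += 20
    return score

print(naive_meter("BillClinton1992"))  # 100, yet it is an obvious guess
```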

A good policy is to enforce password generation through a computer tool. Computers are good at randomness (as long as they use /dev/urandom, of course). The tool incarnates the password generation process, so you can know "how strong" such passwords are. Since users will not like the effort of remembering passwords, inform them that they are allowed to write down the password on pieces of paper (e.g. business cards), as long as they keep them in a safe place. In that way, users are not responsible for the password strength, but they are responsible for maintaining the physical security of their wallet. And, more importantly, make it known that users who lose their password will not be blamed, because the one thing which is worse than a stolen password is an unreported stolen password.
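
For example, a generator along these lines (a sketch; Python's secrets module draws from the operating system's CSPRNG, e.g. /dev/urandom on Linux) pins down the generation process, so the strength of every issued password is known by construction:

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits  # 62 symbols

def generate_password(length: int = 15) -> str:
    """Each character carries log2(62) ~ 5.95 bits, so 15 characters
    give roughly 89 bits of entropy, by construction."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```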

Even if you prefer the more traditional solution of letting users choose their passwords, you should still offer a password generator and promote its use.

To help users remember passwords, have them type the password daily. Not too often: nobody likes to type a 15-letter password every ten minutes. But if they enter the password every morning, they will remember it, and this makes it easier to enforce automatically generated passwords. Please, oh please, do not make regular password changes mandatory. This has only very minor security benefits, but it irks users and promotes anti-security behaviour (e.g. password sharing, which users tend to do as a protection against losing a password through forgetfulness).

Thomas Pornin

I'm going to assert that software developers are not free of responsibility on this count.

If you're the developer, does it seem fair to assert that it is at least your responsibility to provide a solid mechanism for enforcing the policy chosen by your customer?

What should the default policy be for non-savvy customers? In other words, for most of your customers?

Surely the default should not be to leave a system open out of the box.

I still see cases in pretty recent history (as in earlier today) of, say, database server vendors setting the system admin user to "dba" and the password to "sql" (wonder who?).

"Big deal, it's up to the customer to change that" you say?

The problem is that having that kind of laughable default level of security leads directly to at least some ISVs shipping products that not only don't change the default credentials, but rely on the weak, well-known default credentials, and, to top it off, publish that reliance in official product support documentation in black and white indelible ink.

I could easily name multiple products, but I won't. Just trust me, it's true. It used to be true of SQL Server with its built-in "sa" user account back in the day, but Microsoft has had that hole in the installation patched for a while now. But it's still true elsewhere.

I am perfectly aware of systems running in companies far and wide right now, containing what is by any measure very sensitive data, with application front ends that implement access controls in-app, giving the illusion of some security, but which are in fact utterly wide open to full-on administrative access by any vaguely savvy user with a copy of Excel and access to the software's install manual, or any clue about the back-end database and a proclivity to experiment.

Craig Tullis