20

When writing security policies, do you take into consideration users' (or vendors') ability to fulfil certain mandates in the policies, or do you strictly want them enforced, no matter what?

I ask because I am hit with this dilemma: some of the statements I have crafted are a bit stringent (for example, setting a minimum password length of 12 or more), and I am afraid that those who must comply with my policy statements would be unable to do so at this point in time (they could be small vendors/users that don't have very good security setups).

Update:

I am sorry for the confusion. My policy may (or may not) include a part on outsourcing at this point in time, as it is only a draft. The question I asked is whether we create a policy based on what the users/vendors can achieve, or whether we basically don't give a hoot and make it a strict policy to enforce, because after all, it is security we are talking about. And if it's a critical system, then all the more reason to allow no exception. Practically an "all must do it, I don't care" mentality. Who is going to get blamed when things happen? The security team, right?

WhiteWinterWolf
Pang Ser Lark
  • Are you asking about the ability of your users to comply, or about the ability of 3rd-party vendors (whom you are selecting among) to comply, or something else? Your first sentence suggests you are thinking of "users", but your final parenthetical suggests "vendors". It would help if you could edit the question to be clearer about this, as the answer might depend upon which you're asking about. – D.W. Jan 04 '16 at 20:37
  • I have edited. The policy could apply to vendors or our own users. – Pang Ser Lark Jan 05 '16 at 00:29
  • Don't ever ask for something that can't be enforced by the vendors or even reasonably followed by the end users. It will be grounds for your entire security system to be bypassed or not adopted, on the basis that things do not get done on the affected computers. This is a known tenet in research on the topic, widely covered by work on organisational security and on usable security. – Steve Dodier-Lazaro Jan 05 '16 at 00:59
  • @SteveDL: I think (not sure) that in talking about small vendors the questioner has in mind requirements that can be met by some but not all of those potentially subject to the policy. Naturally one can rule out a requirement that *nobody* could meet, and might include a requirement that some but not all potential vendors/users can meet. But of course those who can't meet it don't get to be your vendors/users. We don't know whether the questioner is like Facebook (must have all users) or like the military (happy to exclude vendors for something so trivial as being a foreign spy). – Steve Jessop Jan 05 '16 at 01:39
  • Hi Steve DL and Steve Jessop. We are not so hard on vendors as to say we would not work with them if they cannot meet our standards. If they don't, I would probably find out why they can't, and try to find ways to help them meet our standards. That should be the first step in what we do. – Pang Ser Lark Jan 05 '16 at 04:38
  • Hi WhiteWinterWolf, sorry about that. I did not know. – Pang Ser Lark Jan 05 '16 at 09:48
  • (Just as a side note, I'm voting against closing the question. If we can pin down a specific organisational context I could cite relevant research and give pros and cons of the different attitudes @PangSerLark can adopt towards the vendors) – Steve Dodier-Lazaro Jan 05 '16 at 12:20
  • What about setting the policy where you'd like it, with a statement giving someone in your position the ability to grant exceptions to the policy? – Scott Bevington Jan 05 '16 at 14:13
  • Can someone edit in a less vague question title? – Stevoisiak Jul 14 '17 at 16:03

5 Answers

33

To quote AviD on this:

Security at the expense of usability comes at the expense of security

If you make it too hard to fulfill a security policy, people will either ignore it or look for loopholes and workarounds which fulfill it to the letter but not in spirit. You will thus achieve the opposite of what you intended and weaken security.

A security policy should therefore:

  • Aim to make people sensitive to problems instead of enforcing solutions.
  • Encourage people to take responsibility for maintaining good security instead of blindly following instructions.
  • Be mostly SHOULD policies and not MUST policies, so people can diverge when they have good reason.
  • Find a reasonable compromise between security and usability.

Anecdote: A friend of mine worked on a team maintaining a system with a very stringent password policy. Not only did it mandate regular password changes, a minimum length, and the use of numbers, lower-case, upper-case and special characters; it also had a long set of rules intended to prevent new passwords from being too similar to previous ones. The result was that you could usually find papers in the bins, or littered around the office, on which people had tried to construct passwords fitting these stringent rules. Thanks to the password policy, it would have been ridiculously easy to obtain passwords through dumpster diving.
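To see why such similarity rules backfire, here is a minimal Python sketch of a hypothetical "too similar to the previous password" check (the rule and threshold are invented for illustration). Comparing character positions requires both passwords in plaintext: that is workable for the immediately previous password, which the user types during the change, but applying it to a whole password history implies the system keeps old passwords in recoverable form.

    # Hypothetical history rule: reject the new password if it matches
    # the old one in more than `max_matching_positions` positions.
    def too_similar(old: str, new: str, max_matching_positions: int = 0) -> bool:
        matching = sum(1 for a, b in zip(old, new) if a == b)
        return matching > max_matching_positions

    # "Winter2015!" -> "Winter2016!" differs in only one position, so a
    # strict rule like this rejects it and pushes users toward paper notes.
    print(too_similar("Winter2015!", "Winter2016!"))  # True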

Philipp
  • Just create a policy forbidding dumpster diving. :) – CodesInChaos Jan 04 '16 at 16:43
  • The OP seems to be talking about a technical ability of 3rd party vendors to comply with the policy - not an individual's willingness to comply. – schroeder Jan 04 '16 at 17:25
  • I'm not sure the anecdote is all that relevant to the question, but IBM has the option on some systems to require that your new password doesn't have 'any' of the same letters in the same position as in any of your previous passwords. Besides being a really stupid idea (eventually you run out of possible passwords), I think it might mean that the clear-text password is being stored somewhere (let's hope it's encrypted). – JimmyJames Jan 04 '16 at 17:42
  • @CodesInChaos: the policy preventing dumpster diving (aka secure document disposal) almost but doesn't quite solve the problem, since these heinous passwords are also taped to the monitors... – Steve Jessop Jan 05 '16 at 01:07
7

Some advice from someone who has written many a policy:

Rule #1.

This is the most important rule; in fact, it's a prerequisite for starting any other process. If your management team is not going to back the policies, it is not worth your time, the company's time, let alone the users' time to enact anything. I'm talking about your C-level: if they are not on board, stop now. Save yourself the mental anguish.

Policy, Standard, Procedure, Guideline

There is a drastic difference between them, and you should read up on it, but in short:

  • Policy - Why do we need this?
  • Standard - What is needed for this?
  • Procedure - How do we execute this?
  • Guidelines - Best practices, important footnotes, etc.

To make the distinction concrete with the question's password example: the policy says accounts must resist password guessing; the standard sets the 12-character minimum; the procedure describes how to configure that minimum on each system; a guideline might recommend passphrases.

Policies are Goals

Policies are what you strive for in your company. Situations arise, however, where you cannot always meet the goal of your policies. We call these exceptions, and there are usually teams who analyze the risk of not meeting the requirements. Some businesses can exception away their security policy until it is pointless (very bad); other companies are so heavily regulated (think banking) that failure to meet the policy can mean great impact to the company's operations.

Users are Important ............ ish

The larger the organization, the more users, the more opinions, and the harder it is to change. Past a certain size, changes are enacted by management with the help of employees; in smaller organizations, they may be enacted by employees with management approval. This sounds similar, but the interactions can be very different.

Information Security has a rather tough time with policy for the simple fact that we are normally forcing other groups to comply with our mandates. In large companies, implementing a policy such as password requirements is not something IS is responsible for; it's normally an ops team or the application owner.

Not everyone needs a policy, but it impacts them

Jenny in Accounting doesn't care that Frank in IT is required by Paul in Information Security to make sure the server is running TLS or to enforce a minimum password length on the accounting application. Jenny will care if you have no policy and magically one day her password of 1234 is required to be 8 alphanumeric characters. This is not policy, this is impact, and you may or may not be part of dealing with it. Simple note: HR is your friend.

Shane Andrie
  • 3,780
  • 1
  • 13
  • 16
6

You basically have two choices here: 1) you consider capability and have a process for exceptions, or 2) you refuse to work with anyone who cannot implement your policies.

If you don't do 1) and can't do 2), it means your policies exist on paper only. If you don't have the power to make 2) happen, you are left with considering capabilities. I would say, though, that you need to have some ability to be strict. If a vendor can't do a 12-character minimum, you might be able to accept that given other mitigating controls, but you wouldn't want to accommodate really egregious issues, e.g. a hardcoded admin password.

JimmyJames
  • I agree. Set the policy that manages the risks to your organization and implement an exception process so that you can manage situations where the policy cannot be followed. An exception policy allows you to manage the one-off risks when they arise. – schroeder Jan 04 '16 at 17:27
  • Thanks. Is the exception policy usually a separate document, or included in the policy as an appendix/annex? – Pang Ser Lark Jan 05 '16 at 00:38
  • Exceptions would be granted on a case-by-case basis, so they wouldn't be in the policy. You should document that exceptions can be granted and what the process for granting them would be. – JimmyJames Jan 05 '16 at 16:26
2

One thing you might be able to do, depending on which policies you go for, is make your policies complement each other, so that implementing one policy becomes easier and/or more secure if you also implement another.

For example: you give the example policy of password requirements. If, alongside that policy, you also require your users to use a password manager that takes care of generating and storing a proper password securely, your users will be less inclined to game the system, because not gaming the system is easier.
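As an aside, the "generating a proper password" half of that job is trivial to illustrate. A minimal sketch using Python's secrets module; the length of 16 is an assumption, chosen to clear a 12-character minimum comfortably:

    import secrets
    import string

    # What a password manager does for the user: generate a uniformly
    # random secret that satisfies the policy, with no urge to game it.
    ALPHABET = string.ascii_letters + string.digits + string.punctuation

    def generate_password(length: int = 16) -> str:
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    print(generate_password())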

Another example: if you require your users to use TLS on their frontend servers, some of them might do a shoddy implementation with expired or insecure certificates. A complementary policy would be to use Let's Encrypt to make certificate handling easier.
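Such a complementary policy is also easy to verify. As a rough sketch (the hostname is a placeholder), Python's standard library can check that a frontend presents a valid, unexpired certificate:

    import socket
    import ssl
    import time

    # Handshake with the host; the default context already rejects expired
    # or untrusted certificates by raising ssl.SSLCertVerificationError.
    def cert_days_remaining(host: str, port: int = 443) -> float:
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                not_after = tls.getpeercert()["notAfter"]
        return (ssl.cert_time_to_seconds(not_after) - time.time()) / 86400

    # Example: flag certificates that are within 30 days of expiry.
    print(cert_days_remaining("example.com") < 30)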

Something you could do at the same time is encourage people to implement policies properly through monetary incentives. I'm not talking about handing out fines to those that do a poor job, but rather about giving a small discount (like 1% or so) on things like license or support fees, based on certain quantifiable and fair goals. For example: if the website your TLS-requiring product is running on gets an A on Qualys SSL Labs (which is trivial to check), they get a 1% discount on their next license renewal (unless you're the one that maintains the website, of course).
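That Qualys check can even be scripted. A sketch against the public SSL Labs assessment API (v3); the hostname is a placeholder, and a production version would need error handling and respect for the API's rate limits:

    import json
    import time
    import urllib.request

    API = "https://api.ssllabs.com/api/v3/analyze"

    # Poll the cached SSL Labs assessment until it is ready, then return
    # the grade of each of the host's endpoints.
    def ssllabs_grades(host: str) -> list[str]:
        url = f"{API}?host={host}&fromCache=on&maxAge=24"
        while True:
            with urllib.request.urlopen(url) as resp:
                report = json.load(resp)
            if report["status"] == "READY":
                return [ep.get("grade", "F") for ep in report["endpoints"]]
            if report["status"] == "ERROR":
                raise RuntimeError(report.get("statusMessage", "scan failed"))
            time.sleep(30)  # fresh assessments take a few minutes

    # Example: grant the discount only if every endpoint scores A or A+.
    print(all(g in ("A", "A+") for g in ssllabs_grades("example.com")))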

So you write your policy in such a way that segments reinforce each other and reward those that go above and beyond the call of duty.

Nzall
2

The answer depends on why you're even thinking of setting a 12-character minimum.

If it's because the system is known to be trivial to attack with 11-character passwords, and proven to be computationally infeasible to attack with 12-character passwords given the foreseeable compute resources of the planet, then you make it an absolute requirement. Anyone who can't or won't implement that minimum simply doesn't meet your security policy, and that's that. The consequences for them might be really bad (they can't sell their product), or manageable, or absolutely insignificant (they use a different system that relies for its security on something other than 12-character minimums and therefore has a different policy), but those consequences are justified.
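To make "computationally infeasible" concrete, here is a back-of-the-envelope sketch. It assumes uniformly random passwords over the 95 printable ASCII characters and a generously fast attacker; human-chosen passwords have far less entropy, which is exactly why the reasoning behind the number matters:

    import math

    ALPHABET = 95              # printable ASCII, assuming random choice
    GUESSES_PER_SECOND = 1e12  # a generously powerful offline attacker

    for length in (11, 12):
        keyspace = ALPHABET ** length
        bits = math.log2(keyspace)
        years = keyspace / GUESSES_PER_SECOND / (3600 * 24 * 365)
        print(f"{length} chars: about 2^{bits:.0f} keys, ~{years:,.0f} years to exhaust")

Each extra character multiplies the attacker's work by 95, so the boundary between feasible and infeasible really can sit near a single character; the point is that the chosen minimum should fall out of this kind of arithmetic and a realistic attacker model.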

If you set a 12-character minimum because you have no idea what value you should set, and 6 is kind of OK but not enough forever so you doubled it, then indeed you should take account of what is practical to implement. Presumably you do want people to use your policy, rather than: an employee finding another job that doesn't drive them insane with impossible demands; the CEO flatly ignoring it and buying something that meets no part of your policy; a vendor finding another customer or doubling their prices to cover the additional effort. So if the policy contains arbitrary and onerous requirements, you're unnecessarily risking all of those outcomes.

In practice you're somewhere in between those two extremes. But once you know why you're making the requirement, you can decide:

  • it's a hard requirement. Anything that doesn't satisfy it doesn't accord with the policy.
  • it's a recommendation that you believe to be achievable, and you explain the consequences of not satisfying it.
  • it's half-baked. You need to do more work to establish what the requirement should be before setting policy.

Consider also the goal of the policy. If the object of the exercise is to ensure that you don't use vendors with poor or average security setups, then excluding vendors who lack good security setups, whether because of their small size or otherwise, is an advantage of a requirement that only those with good security can meet. The policy is there to guide people; it's also there to exclude people who can't meet it.

Who is going to get blamed when things happen? The security team, right?

Yes, and there are two ways your policy can fail and get you blamed. It can include someone who, with hindsight, you realise it should have excluded, and you get blamed for a security breach. It can exclude someone who, with hindsight, you realise it should have included, and you get blamed for obstructing the business of the company. A critical system cuts both ways: you don't want it to be insecure, and you also don't want it never to get built because the policy is too hard to meet. Your job is to find something that's both adequate and achievable (or, if you can't do that, at least to explain why not, so that someone else can get involved in changing the constraints you're working with).

A massive security breach is bad, but if it were inherently worse than insolvency, companies would "play safe" by (for example) never attaching anything to the internet in the first place, and all go bust. For obvious reasons this is not what they choose to do. So the provisions of the policy need to be justified, and the justifications need to be available for later review, so that even if something goes wrong they will still look like reasonably good decisions given what was known at the time you made them.

Steve Jessop