14

In a philosophical sense, is heterogeneous security (a system where people are given more autonomy) better than security policies and procedures written in stone?

I've worked at companies where office politics were so strong, and everyone was so caught up in following procedures, that intelligence was completely abandoned. For example, people would still deposit confidential documents in a secure waste-disposal bin even when it was overflowing, so that anyone who walked by could see and grab a document from just inside the lid. This is really bad, because an attacker would now know to go straight to the overflowing secure disposal bin, where he would find valuable information.

In other words, when is having "strict rules" worse than asking people to be careful? I guess it's linked to bureaucracy, which seems to set in as companies become large.

Anecdote: another good one is from a place where I started working, where the computers were severely locked down. To change anything, you needed the IT staff to enter their password. The IT staff could not care less about why they were entering the password and would always do it for anything asked.

Celeritas
  • If rules are too strict, people start finding ways around them, or even ignore them. – S.L. Barth May 08 '15 at 09:26
  • To the people voting to close this question as “primarily opinion-based”: there are studies on this topic. Just because it's about people rather than about computers doesn't make something “primarily opinion-based”. This question is answerable with data and experience. – Gilles 'SO- stop being evil' May 08 '15 at 12:13
  • Here's an example: http://dilbert.com/strip/2007-11-16 – Daniel May 08 '15 at 16:24
  • Please, create a new question and link to this one instead of editing your current question. That makes it difficult for future readers to understand the relationship between existing answers and your question (and also it leads to overly broad questions). – Steve Dodier-Lazaro May 09 '15 at 22:15

3 Answers

12

What you are pointing out is the difference between imposing security rules on people and involving people in achieving better security.

Chances are that you will find this video quite interesting. After a walk through issues quite similar to the one you mention, the presenter (Jayson E. Street) ends by talking about positive enforcement. It makes the difference between a few administrative people trying hopelessly to make all the "stupid users" comply with imposed rules, and the same administrative people considering the other employees as co-workers, or even as a "human IDS".

  • If people do not understand the goal of a security measure, or do not feel it to be useful, they will never apply it (or at least never apply it in a sensible way).
  • If people feel that a security measure prevents them from doing their daily job, they will just work around it.
  • If people feel that security only comes in a top-down direction, they will be very unlikely to raise any issue or report any suspicious event.

That's why the security literature insists on the role of regular training in educating people. Security is not there to please management and auditors. Security is there to ensure the safety of the company, of the customers, and ultimately of the employees themselves.

Better security is not achieved by defining always stricter rules. Better security is achieved by getting people involved.

WhiteWinterWolf
  • The "more training and education" route that is often advertised keeps spectacularly failing to deliver, simply because it does not acknowledge that people can spend limited amounts of cognitive, physical and temporal resources on security, which is by definition not a valuable or productive activity. – Steve Dodier-Lazaro May 08 '15 at 10:37
  • So (roughly quoting A. Sasse here) if you want to make your staff actively spend a certain amount of time on security tasks, make it a formal agreement that security management is part of their job and remunerate the time they spend on that activity. Else, reduce the productivity cost of security first, and then you can think about improving compliance. – Steve Dodier-Lazaro May 08 '15 at 10:39
  • A thing which may matter is the form given to such training. A repetitive formal security training given by some unknown face, preaching rules that go against people's habits, is most likely to be felt as plain boring lost time. I think an idea to highlight in this talk is the idea of proximity and keeping people's attention up, and this can be achieved only through long-term daily activity (local point of contact, checking offices, providing feedback, ...), not with a one-shot formal training. – WhiteWinterWolf May 08 '15 at 12:51
  • True. "Formal" training can be used to address blatantly wrong mental models of security risks, but will not solve the problem of motivating employees to comply or detecting and solving structural issues that prevent employees from complying in the first place. – Steve Dodier-Lazaro May 08 '15 at 12:54
7

There are a lot of complex issues, mainly related to effort and trust dynamics, that undermine security policies in organisations. While individual factors have been uncovered by researchers, there is not yet a single unified theory of which security management styles are preferable or which policies to implement in any given organisation.

The direct cost of security compliance

First things first: all decent security engineers know that some risks are better ignored than dealt with, because the cost of protection would be higher than the loss caused by a breach. Likewise, some policies or mechanisms cost end users / employees much more than the value they provide to organisations, but many organisations have not yet realised this. You can find famous, generic examples about phishing protection and SSL certificates in Herley's So Long, and No Thanks for the Externalities.

In organisational security, Beautement et al., in The Compliance Budget, used qualitative interviews with employees to determine what causes them to comply. Four factors come into play: the costs and benefits of compliance for the employee, and the costs and benefits for the organisation. Essentially, employees make a cost/benefit estimation when they need to decide whether or not to comply with a security measure.

The issue of perception

Evidently, people who have not received training in computer security have incomplete mental models of how security works and misestimate the risks they're taking. So rather than on actual costs and benefits, employees reason based on their perceptions of costs and benefits, both for themselves and for their organisations. This means that adjusting perceptions is one possible course of action for organisations that are already cost-conscious but still run into compliance issues.

Additive costs and the compliance budget

The main idea behind the Compliance Budget model is that employees will tolerate a limited amount of daily effort/cost to themselves in order to deliver the organisation's benefits. According to Beautement et al., the sources of cost to employees (physical and mental load, embarrassment, missed opportunities, and the fact that security competes with more motivating and important tasks) are more numerous and tangible than the individual benefits (avoiding the consequences of a security breach, and avoiding sanctions for being caught bypassing). If that model is correct, then organisations can act in a number of ways to raise the compliance threshold of employees:

  • increase the perceived benefit to the organisation (e.g. by improving employees' mental models of the consequences of breaches)
  • increase the perceived benefit of compliance to users (e.g. by punishing employees more harshly...)
  • decrease the perceived, and actual, costs of compliance to users (which is in itself a very complex topic)
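The core of the model can be sketched as a toy simulation. To be clear, this is my own illustrative reading, not code or numbers from the paper: the task names, the cost/benefit scores, and the assumption that costs simply add up against a fixed daily budget are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SecurityTask:
    """A security measure an employee is asked to follow (hypothetical example)."""
    name: str
    perceived_cost: float     # effort, delay, embarrassment... as the employee sees it
    perceived_benefit: float  # perceived value of complying, to self and organisation

def tasks_complied_with(tasks, daily_budget):
    """Comply with tasks that seem worth their cost, until the budget runs out."""
    complied = []
    spent = 0.0
    for task in tasks:
        worthwhile = task.perceived_benefit >= task.perceived_cost
        affordable = spent + task.perceived_cost <= daily_budget
        if worthwhile and affordable:
            complied.append(task.name)
            spent += task.perceived_cost
    return complied

tasks = [
    SecurityTask("lock screen when leaving desk", perceived_cost=1, perceived_benefit=3),
    SecurityTask("shred confidential printouts", perceived_cost=2, perceived_benefit=4),
    SecurityTask("encrypt USB sticks for client visits", perceived_cost=5, perceived_benefit=2),
    SecurityTask("rotate password weekly", perceived_cost=4, perceived_benefit=1),
]

# Tasks whose perceived cost outweighs their perceived benefit are skipped outright;
# the rest are followed only while the daily compliance budget lasts.
print(tasks_complied_with(tasks, daily_budget=5))
```

Note how the three levers listed above map onto the model: raising perceived benefits or lowering perceived costs flips individual tasks from "skipped" to "followed", while reducing overall cost stretches how far the fixed budget goes.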

And the hidden costs

Now that the general principles are laid down, there is additional research that explains typical compliance-draining factors, mainly from Bartsch and Sasse in How Users Bypass Access Control and Why. Below is a breakdown of some relevant observations made in the paper.

The cost of correcting and updating policies

CISOs, like all human beings, make mistakes and approximations, and can occasionally deploy policies that conflict with the needs of individual workers and prevent work from getting done. For instance, a data encryption policy might prevent sales staff from using convenient storage media when going to present their products to remote clients. Furthermore, employee needs change over time as work practices evolve or new projects emerge.

Employees can often need to wait a long time to get policy changes implemented, which severely undermines their productivity. For instance, teams that welcome interns can end up unable to provide them with work, unless they bypass access control by sharing credentials or deploying unprotected shared storage media. The fact that access control is often managed top-down rather than in more decentralised ways can contribute to these delays and productivity deadlocks.

Inter-employee trust vs emotional blackmail

Ironically, employees who get to manage access control directly for their team (sometimes out of sight of their organisations) can feel emotional pressure to grant access to their colleagues and subordinates, as they want to avoid grudges within their team. Kirlappos et al. expand greatly on the role of inter-employee trust in security decision making in Learning from Shadow Security.

In my opinion, both papers lean towards the idea that security could be collaboratively managed by organisations and the employees in a position to make security decisions. Bartsch and Sasse point out the need for organisations to provide high-level security policies that can be implemented in local file-sharing systems, as this can address deficient security mental models and the issue of emotional blackmail over access.

There are a few suggestions that in-situ, local decision making is a better fit for personal security too (Reactive access control, Laissez-faire file sharing).

The above should suffice to explain why rigid security policies are bypassed, and to provide options for improving compliance and policy fitness. Although there is still a lot of research to be done, there aren't any inconsistencies on these topics in the academic literature that would lead me to believe any of these hypotheses, models and results to be incorrect. They're probably the best-informed strategy to apply right now.

Steve Dodier-Lazaro
0

No organization's objective is "security." The purpose of sensitive information is not its own existence, but its application to organizational objectives. This is to say that any effort spent on security takes away from real organizational objectives - it's detrimental.

That said, let me address your question directly. With a question: "strict" in what way? It sounds as if you are talking about confidentiality, but that's only part of the security equation.

There's an old (for IT) yarn about a developing country whose rulers kept their arsenal in a vault with a digital lock, and stored the vault's digital key in a DOS password keeper with its password known to a few. During a rebellion those few were killed and the arsenal couldn't be used, and the regime fell. The weapons were secure but unusable.

You also have to think about availability and integrity - that is, the entire "security triad." There's an intrinsic tension between them: you can't maximize all of them.

I appreciate the answers citing training because sensitive organizational data is used by people in their work roles, meaning those people are making frequent decisions about handling it. However, you also shouldn't be presenting them with complex or trade-off security decisions lest you cripple them in actually using the information to accomplish their work. I know security consultants make more money helping companies create lengthy and ambiguous security policies, and still more training the employees in the anxious and mind-numbing practice of applying those policies, but I think that's misconduct on their part and on the part of the organizational managers who put such policies into force.

Andrew Wolfe