36

I did a quick Google search before asking this and came up with the following article, linked from Schneier's blog back in 2005. It doesn't really answer my question, though.

As society has crossed into the internet age, from the early 1990s until now, computer security has gone from an obscure, almost irrelevant topic to something everyone should have at least some knowledge of. What would have been thought of as paranoid 10 or 15 years ago is simply a good precaution these days.

It probably doesn't take an Einstein to work out where the trend is going, and it is very likely to continue that way because of a few factors:

  1. There is money involved. People are making money (and lots of it!) from security breaches.
  2. Stupid and/or naive people. Because of this, the biggest security hole is usually found in meatspace, and can't really be patched.
  3. Features vs security; features make money now, security only pays off in the event of a breach.
  4. Due to factors 1-3, money will continue to be made from security breaches for the foreseeable future. Because money will be made, technology to breach security will continue to improve.
  5. Hardware and technological advances. GPUs, rainbow tables, specialized hardware, password crackers, you name it, it's either here or around the corner and has potential to make what was once secure, insecure.
  6. Google and a society of net-savvy people. It is easy to research how to breach a given security measure, and if a technique to do so exists, it will probably be found.

An implication of the above is that it is often easier to go a bit overboard and design things to be very secure once than to have to review your security all the time. For example, even though xkcd pokes fun at 4096-bit RSA, it is now suggested to use more than 1024 bits. Since I started using 4096-bit RSA back when 1024-bit was the standard and googling "4096 bit RSA" only yielded a link or two (I think it was 2004 or 2005), I don't have to change keys on those systems yet.
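
If you want to follow that "overbuild once" approach for your own keys today, a minimal sketch is below. It assumes the third-party Python cryptography package; the key size, passphrase, and file names are illustrative, not recommendations for any particular situation.

    # Minimal sketch: generate a 4096-bit RSA key pair with the third-party
    # "cryptography" package (pip install cryptography). Key size, passphrase,
    # and file names are illustrative only.
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=4096)

    # Encrypt the private key at rest with a passphrase.
    pem_private = key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.PKCS8,
        encryption_algorithm=serialization.BestAvailableEncryption(b"change-this-passphrase"),
    )
    pem_public = key.public_key().public_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PublicFormat.SubjectPublicKeyInfo,
    )

    with open("id_rsa_4096.pem", "wb") as f:
        f.write(pem_private)
    with open("id_rsa_4096.pub.pem", "wb") as f:
        f.write(pem_public)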

Another implication is that security by obscurity is probably only really useful in some forms of physical security, not when your systems can be readily examined. The first step in a targeted attack is to research the target, and the next step is to research techniques to defeat it (using Google). So you start thinking to yourself: if I truly want a good level of security, I need to make things properly secure rather than merely secure-looking. And there can be a big difference in cost (time or money spent) between the two.

In order to make something properly secure, you have to analyze your defenses and brainstorm ways they could be breached. This takes time. If you are intelligent, you will come up with an almost endless variety of ways to breach a given defense, and implementing defenses against them takes more time. Some defenses also have serious downsides. Ever encrypted a file and lost the key? Then you know what I'm talking about.

Looking at it like that, it is very tempting to take a "nuke it from orbit" approach to security. Don't just shred the papers in your trash; burn or compost them. Use NoScript and only whitelist sites you trust. Browse from a VM. Always use strong passwords. Never reuse an email account, username or password across internet forums. Don't confide in other people; they may burn you in the future. Etc., etc., etc.
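
For the "never reuse a password" part, a password manager is the usual answer; as a minimal sketch, Python's standard secrets module is enough to generate a unique, strong password per site (the length and alphabet below are illustrative):

    # Minimal sketch: generate a distinct random password per site using only
    # the Python standard library. Length and character set are illustrative.
    import secrets
    import string

    def random_password(length: int = 20) -> str:
        alphabet = string.ascii_letters + string.digits + "-_.!@#%^&*"
        return "".join(secrets.choice(alphabet) for _ in range(length))

    # One distinct password per account, so a breach of one forum
    # does not expose the others.
    for site in ("forum.example.org", "mail.example.com"):
        print(site, random_password())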

It is also easy to say that no one is interested in your data, or that if you have nothing to hide you have nothing to fear. That depends on who you are. Some people are going to be targets no matter what. A largish corporation is going to be a target. Wealthy people and their families will be targets. Having unpopular political views (even ones you no longer hold, or that were popular at one time) might make you a target. Have an enemy or a stalker? You are a target.

Some precautions are warranted. Some precautions may not be necessary. Sometimes it is hard to tell which is which, and to get your brain to shut off. So what do you do when your OCD, combined with your interest in security, causes a non-terminating loop of paranoia? It's easy to say "look at the cost of implementing the security measure vs. the cost of a breach times the probability of a breach", but even doing that is far more than most people used to do, and it can consume a lot of time.
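
For what it's worth, that comparison is just arithmetic once you are willing to put rough numbers on it. A minimal sketch, with every figure invented purely for illustration:

    # Minimal sketch of the "cost of the measure vs. cost of breach * probability
    # of breach" comparison. All figures are invented for illustration only.
    cost_of_breach = 50_000        # estimated loss if the breach happens ($)
    probability_per_year = 0.05    # estimated chance of that breach in a year
    cost_of_measure = 1_000        # yearly cost of the proposed security measure ($)

    expected_annual_loss = cost_of_breach * probability_per_year   # $2,500 here

    if cost_of_measure < expected_annual_loss:
        print("Measure is probably worth it:", cost_of_measure, "<", expected_annual_loss)
    else:
        print("Measure probably costs more than it saves.")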

TL;DR: How do you balance the tendency to want to secure everything to the point of paranoia with some practicality? Are there mental tricks that let you say, "Whoa, stop, you are just wasting your time here"?

I think part of what may have started me down a non-terminating loop is the idea that we should never take a risk if we can't live with the consequences. That statement can't be taken literally: if you take it to its logical conclusion, it means we can never travel anywhere. I guess you do have to have some sort of personal bar for making decisions, but I have yet to figure out a quick heuristic for doing so.

Some things in life should also be 100% reliable/secure. For example, the button for launching a nuclear attack in response to a threat. If that button gets pushed, we are all toast. But if 100% security or reliability is impossible, what do we do in that situation? I pity the engineers charged with making the calls for things like that.

kalina
user1971
  • @user1971 - Thanks for the "TL;DR" summary. 'Cause, seriously... TL;DR. – Iszi Apr 26 '11 at 14:05
  • Nice example of taking a simple question and making it scary enough to promote OCD :) Anyway, good question! – nealmcb Apr 26 '11 at 16:42
  • You DO realize that article was a joke, right? :) – AviD Apr 26 '11 at 18:32
  • Even so, I'd like to +100 this question! It really puts the light on where it should be, and we need more intelligent risk analysis / management questions here. – AviD Apr 26 '11 at 19:51
  • @nealmcb: Thanks. :) I needed the TL;DR to get across that there are some rational reasons to take security seriously to the point of OCD. Glad you liked. – user1971 Apr 27 '11 at 07:11
  • @Avid: It was a joke? I got the impression he was mostly serious with a slight bit of whimsy. e.g. I don't really see the sense in some of the things, e.g. giving the kids smart cards. Also some of it is unrealistic; e.g. I've had to compromise by giving my 3yo a password that is short but not an actual word. ;) And thanks very much for the kudos. – user1971 Apr 27 '11 at 07:17
  • @user1971 oh yes. I wasnt sure at first either, but the kids comments made it clear... also, read some of the comments on Schneier's post. – AviD Apr 27 '11 at 08:46

5 Answers

24

I see two sides on this:

  1. most government bodies I review/audit tend to believe that because they secure everything, they are the most secure and that this is the way it should be!
    In actuality the organisations that go down the security nazi route usually end up more open than those who are pragmatic about it. For example, locking down your users too hard with smart cards for physical and desktop access may be fine, but if you make it impossible for them to do their job (if they have been handed work from someone else who is off on holiday, but the work is assigned to a different card) then they get very good at workarounds - like sharing cards; like pretending to lose cards to get replacements and hoping the old ones don't get invalidated for a while, etc.

  2. most private sector organisations want to spend the bare minimum on this sort of thing to meet the regulator's requirements, so the opposite argument holds true: I need to argue the case for more security.

At the end of the day it comes down to understanding the organisation and working out an appropriate level of control which meets their risk appetite and is understandable at board level. Appropriate here may not actually be very secure, or it may be Fort Knox with bells on - every organisation is different!

Rory Alsop
19

Yes, I think it's possible to be too paranoid. That said, I just finished talking security with a bunch of performing artists - people with no money who really need to spend their time promoting their work and creating new work... not building the Fort Knox of security just so they can use Facebook. They need common sense, a basic understanding of some areas of concern, and a few cheap tricks. They don't need (and can't afford) smart cards, an OCD mindset about information output, or armed guards securing their physical perimeter. Certainly that's warranted for a major financial institution or a national government, but it's a matter of degree.

Also, even for a target system at high risk, you have to make sure that the security measures in place are measures that will actually help. In a war zone, a 14-character password is a no-go - too hard to type under stress. But physical security is a must-have, so make passwords simpler inside the perimeter and make sure you have guys with guns around the edge. You have to design for the environment of your system.

One of the best things I've seen in recent years is the concept of "risk analysis". In a nutshell: how valuable is the stuff you are protecting? How expensive is the protection mechanism? How likely is it that someone will actually exercise the vulnerability you are protecting against? How skilled and resource-rich will your attackers be, and what will they be after?

In the end, all security is a best effort; there is no perfect security. So what do you spend time perfecting? And is the money and time to protect those things worth the value of those things?

That's my approach to answering the "how much is enough" question.

bethlakshmi
  • Some good points here. But *make passwords simpler inside the perimeter and make sure you have guys with guns around the edge* is totally the wrong aproach, IMHO. First, avoid passwords. Second, note that besides firewall penetrations, insider attacks are generally a huge threat. Remember the old hacker joke: **"How are corporate networks like a candy bar? They both are hard and crunchy on the outside, and soft and chewy on the inside"** – nealmcb Apr 26 '11 at 16:59
  • +1, good answer. @nealmcb, note that @beth was talking about a warzone - the clear and present danger *is* men with guns. Better off, to have "easier" security (where the higher priority is jumping to the C&C / rocket control / intelligence systems faster) at the password level, and have compensating controls for the event the position is overrun (as it will be, eventually) and passwords retrieved, possibly by torture. Remember, this is WAR! ;) – AviD Apr 26 '11 at 18:37
  • @AviD and @nealmcb - exactly. I'm a defense contractor. I really mean a warzone. I've been asked numerous times to dumb down the user level security because the threat of the physical world was loss of life which trumped loss of information. As @AviD says - stuff that zeroizes ends up trumping stuff with serious authentication. I totally agree that in a corporate environment, the world is much different. As far as avoiding passwords - not always possible on all COTS systems. Sometimes you just have to go with something incredibly simple. – bethlakshmi Apr 26 '11 at 18:43
  • @beth, and having been on the using side of some of those systems, I can say that sometimes its just better off going with verbal authentication - that is, calling on the secure radio systems, and *talking to somebody who knows you*. personally. – AviD Apr 26 '11 at 18:53
  • @AviD - fair enough... I know of exactly some of those situations... although it raises the point that if you want the guy on the radio to stop using his radio in the clear, you'd better give him an easy way to key his radio and use it without authorization parameters... – bethlakshmi Apr 26 '11 at 18:59
  • ayup, that's really just moving the requirement around... – AviD Apr 26 '11 at 19:09
  • @avid @beth, thanks for the clarifications - it didn't actually occur to me that that wasn't metaphorical language. So this answer fits well in with the actual topic of this question. I'd still say it mainly changes the tradeoffs to some degree, and I'm still not sure how well folks with guns can secure a network, but +100 for keeping the actual threats and tradeoffs in mind and not confusing people with inappropriate security procedures. – nealmcb Apr 26 '11 at 19:48
  • @beth - This is a pretty good answer. I guess the difficulty lies in if you are creating a startup that will need to be very secure, but you are also underfunded. It is much easier to secure something well in the design phase rather than in after it has been implemented and the cash is trickling in (hopefully). – user1971 Apr 27 '11 at 07:31
  • @user1971 - right, and that's where TM comes in: it can point you where to spend the minimal amount of security resources you have, and where/how to spend it most wisely. – AviD Apr 27 '11 at 08:48
  • @nealmb - fair enough. You can secure the network physically (mostly) if you have a LAN. If you have a WAN, then you do need some sort of location to location encryption... And nothing's perfect... air wave tranmission is going to be perceivable, even if it's encrypted, etc., etc. – bethlakshmi Apr 27 '11 at 14:43
13

If you read Schneier, you'll be familiar with one of the basic premises of "smart" security that he also pushes a lot:

Security is a Trade-off.

It simply does not make sense to go full-metal-paranoid on your systems, since security can NEVER be 100% anyway (we used to be told the only way to be 100% secure is to unplug the computer... now we know even that isn't 100%).
Also, it doesn't make sense to spend more than the value of what you're protecting... E.g. would you pay someone (a bank, security guards, etc.) $1,000 just to protect $100? No, of course not.

And that's really the meat of it: what is appropriate depends a lot on who you are, where you are, what you have, who's out to get you, and so on - as @Rory wrote.

So you'd be better off spending your scarce security resources in the smart places and saving the rest - it will be there for when there is an unforeseeable hack (when, not if).

So, how do you know where to spend your efforts?
How do you know when it's secure enough?

As @bethlakshmi pointed out, "risk analysis" helps here.

More specifically, I recommend a structured Threat Modeling exercise to help map out the protected assets, the threat agents (who wants to do you harm), the threat paths/trees (how they are going to do it), and so forth. This can very effectively map out your likely threats, and even the less likely ones.
Combine that with a quantitative risk analysis framework (such as FAIR), and you'll know exactly how much to spend on mitigating each threat.
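
As a rough illustration of the quantitative side, here is a minimal FAIR-style sketch: it samples a loss event frequency and a loss magnitude from ranges you supply and produces an annualized loss distribution for one threat. The ranges and distributions are invented placeholders, not FAIR's actual taxonomy or calibration process.

    # Minimal sketch of a FAIR-style Monte Carlo estimate for one threat.
    # The ranges below are invented placeholders; real FAIR work involves
    # calibrated estimates and a richer taxonomy than this.
    import random

    def simulate_annual_loss(runs=100_000):
        losses = []
        for _ in range(runs):
            # Loss event frequency: successful attacks per year for this threat.
            events = random.triangular(low=0.0, high=4.0, mode=0.5)
            # Loss magnitude: cost per successful event, in dollars.
            magnitude = random.triangular(low=5_000, high=250_000, mode=20_000)
            losses.append(events * magnitude)
        return sorted(losses)

    losses = simulate_annual_loss()
    print("median annual loss :", round(losses[len(losses) // 2]))
    print("95th percentile    :", round(losses[int(len(losses) * 0.95)]))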

As @Rory pointed out, you're likely to discover that even with your paranoia, you're putting too much money and resources in the wrong place.

AviD
  • +1 On the 100% security is impossible bit. I like the fact that you have pointed out to some actual methodologies for making the decisions. Thank you for that. I will have to research those and comment further. Thank you for your answer. – user1971 Apr 27 '11 at 07:33
3

The most effective way to get out of security OCD is to develop a threat model. Who are you opposing? A Russian mafia outfit that wants to extort you with ransomware? The careless sniffing of the NSA? Perhaps the full brunt of the entire FBI/CIA/NSA complex (if you're Snowden or the like)? Or are you just trying to keep your little sister from playing online games on your account? How long does that threat have to attack you? Are they trying to break in to deactivate an alarm system so that they can rob you 30 seconds from now, or are they trying to subtly inject Advanced Persistent Threats into your LAN to wrest control of your crypto keys over a decade?

Are your attackers beyond using rubber-hose cryptanalysis techniques?

These questions will allow you to understand what the threat actually is. Without that, it isn't really possible to determine how bad an attack actually is for you, and OCD paranoia creeps in trying to defend against everything.
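
As a minimal sketch of writing those answers down so they can be compared (all adversaries, assets, and ratings below are invented examples):

    # Minimal sketch: capture threat-model answers as data so they can be
    # ranked. All adversaries, assets, and ratings are invented examples.
    from dataclasses import dataclass

    @dataclass
    class Threat:
        adversary: str      # who is attacking
        asset: str          # what they are after
        capability: int     # 1 (little sister) .. 5 (nation state)
        time_horizon: str   # how long they can keep attacking

    threats = [
        Threat("little sister", "game account", 1, "minutes"),
        Threat("ransomware crew", "business files", 3, "weeks"),
        Threat("intelligence agency", "crypto keys", 5, "years"),
    ]

    # Defend first against the threats you actually face, not the scariest ones.
    for t in sorted(threats, key=lambda t: t.capability, reverse=True):
        print(f"{t.adversary:20s} -> {t.asset:15s} capability={t.capability} horizon={t.time_horizon}")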

Another thing that helps is a quote I rely on. You mention some things being 100% reliable/secure, such as nuclear deterrents. A little secret: nothing is 100% secure, not even our nation's nukes. The closest I have seen is the seL4 kernel, for which a machine-checked proof demonstrated that the kernel's implementation, compiled by a perfectly standards-compliant compiler, behaves exactly as its specification says it should. Anyway, the quote, which I had always attributed to Kevin Mitnick but recently found is more likely due to Gene Spafford:

The only truly secure system is one that is powered off, unplugged, cast in a block of concrete and sealed in a lead-lined room in an underground bunker with armed guards, and even then, I'd check on it every once in a while.

Cort Ammon
1

If you can identify 10 stupid things that even "expert" IT people do, and choose not to do those 10 stupid things, then you can build your confidence and feel better emotionally.

Example: downloading .exe's and even .lib's for open-source programs from faraway places and unfamiliar people just because you are too lazy to compile the source code yourself.
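
If you do have to download a prebuilt binary, at least verify it against the checksum the project publishes. A minimal sketch, where the file name and expected hash are placeholders:

    # Minimal sketch: verify a downloaded binary against a published SHA-256
    # checksum before running it. File name and expected hash are placeholders.
    import hashlib

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    expected = "0000000000000000000000000000000000000000000000000000000000000000"
    actual = sha256_of("downloaded-tool.exe")

    if actual != expected:
        raise SystemExit("Checksum mismatch - do not run this file.")
    print("Checksum OK")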

Kerbie