
Sometimes, security best practices protect you against attacks that are very improbable. In these scenarios, how do you defend the implementation of such security measures?

For example, password-protecting access to the BIOS of a thin client. A BIOS without a password is a risk because an attacker with physical access can change the BIOS configuration in order to boot from USB and access the system's data without authentication. But in this case the attacker needs physical access, the thin client does not store much important information, etc. The risk is very low.

Other examples may be related to some measures when you harden systems.

In this case, is it better not to enforce this security measure, in order to stay reasonable with the rest of the company, or does skipping it open little holes that will punch you in the face in the future?

Eloy Roldán Paredes

10 Answers


The potential impact of this is not low, regardless of how much information the thin client stores.

Specifically the risk is that an attacker installs something like a rootkit or software keylogger in the operating system of the thin client, which you are unlikely to be able to discover by inspection. (This is a variant of the Evil Maid attack).

Should any administrator ever use that client in the future, it will be game over for the network. The thin client can also be used as a beachhead to launch further attacks against the network, or to conduct reconnaissance.

Protecting the BIOS prevents this from happening, by protecting the integrity of the thin client OS.

The wider lesson here: best practice saves you the expense and difficulty of assessing every risk, which, as this question demonstrates, is hard to do.

Ben
  • However you have needed to explain the risk in order to justify this specific best practice...is your recommendation to explore the reason after each best practice (IMO good idea) or to accept best practices as dogma without questioning the reason (IMO bad idea)? – Eloy Roldán Paredes Dec 11 '15 at 12:12
  • If you believe best practices do not apply to your situation, you may be correct, or you may simply not understand the full reasons. Of course you can investigate the reasons, and you may still think they do not apply. But you must consider: "How likely is it that I am mistaken?" One cannot know everything; it is not cost-effective. Investigating the reasons in the hope of avoiding the practice may in the end be more effort than implementing the practice. – Ben Dec 11 '15 at 12:37
  • I may accept that nobody knows everything but using this argument I can say that a best practice may introduce a vulnerability and be more insecure and you may not see it because nobody knows everything. So, with these arguments I'm still not sure if it is better to apply a best practice without justification or not... – Eloy Roldán Paredes Dec 11 '15 at 12:54
  • Best practice represents the expertise of others, and the accumulated knowledge and experience of many years. It is better to have understanding, but in the absence of understanding, received wisdom generally suffices. – Ben Dec 11 '15 at 13:00
  • Or you might say that best practice is things that are so frequently the correct thing to do that it is typically easier to do them than to decide whether or not to do them and then to ensure that decision remains correct in the face of future changes. – Steve Jessop Dec 11 '15 at 21:51
  • @SteveJessop - Exactly. I think "future changes" is one of the most important considerations. Suppose you did all the research, and (today) you could be 100% confident and even actually correct, that some "best practice" need not be followed. But some small change tomorrow, by someone else (even a vendor), might make that omitted best practice important. – Kevin Fegan Dec 12 '15 at 03:18
  • It's all well and good to argue that a best practice isn't really worth the time/effort/etc., and shouldn't really be a best practice. But trying to assess that **before you understand why the best practice is there in the first place** is not a good approach. When you thoroughly understand what the practice is and why it has been deemed a best practice, then you can make your own informed decision about whether you think it really should be one. But, in the meantime, erring on the side of safety with stuff like this might be... prudent. – mostlyinformed Dec 12 '15 at 03:24
  • I had the same idea (keylogger rootkit) as you. However my conclusion differs a bit: risk cascades. Any small risk can often be exploited in a way that creates more vulnerabilities and the end result can be unpredictable. Assume any known risk can cascade ten-fold. – Reaces Dec 13 '15 at 16:39
  • This is why a quantitative risk assessment should be performed, including assessing both the **impact** and the **probability**. There are plenty of attacks that are improbable, the question is, what is the cost of implementing the countermeasures vs. the cost of an attack succeeding? In many cases, even a single successful attack can have devastating consequences, which is why it's recommended to protect against them, however improbable they may be. The higher the impact, the less the probability matters. – DrewJordan Dec 14 '15 at 14:08
  • @DrewJordan sometimes the cost of the assessment is more than the cost of the best practice... – Ben Dec 14 '15 at 16:03
  • Well, the OP was asking "how do you defend the implementation of such security measures?" You said the impact is not low, and the OP has already considered the probability... That's your assessment in this case. OP was asking how to justify best practices: this is how, with numbers. How much did it cost you to come up with this example and document it in the form of an answer? Very little I'd suspect, and now you've got a reason behind following the best practice. Yes, it may cost more than just implementing it in the first place, but now you've got the justification, which also has value. – DrewJordan Dec 14 '15 at 16:18
  • The way I would justify it would be to say "Fully Assessing the risk would cost two man-days, then explaining to every new customer why we didn't think it was necessary would be an ongoing project, but implementing this industry benchmark best practice would take four hours.... Now, I'd like to assess the risk properly because it's an interesting project, but I suspect that you as a business would rather I just implemented it, and then went onto the next job". – Ben Dec 14 '15 at 16:25
  • This is just not a good answer. You're free to disagree with the OP and his example. But his point was that some "best practices" are for unlikely events that may or may not be worth the time to mitigate them. The question was about defending not applying "best practices" not about protecting the BIOS. – Steve Sether Dec 29 '15 at 18:16
  • @SteveSether, Either you misread the question or possibly mistyped your comment? OP was not about "defending *not* applying best practices" it was about "defending *applying* best practices" in the face of pressure not to apply them. – Ben Dec 29 '15 at 20:55
  • @Ben You're right. It doesn't really matter though, since this question is answering the example, and doesn't address the actual question. – Steve Sether Dec 29 '15 at 21:18

Costs are the basis to evaluate risks and their mitigations. If the costs (monetary costs, operational costs, ease-of-use costs, etc.) to implement a defence (best practice, etc.) exceed the damage caused by the risk being realized, then you are justified in choosing to accept the risk.

This is a pretty basic concept in risk management.
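As a sketch (my own illustration, not from the answer): the standard quantitative form of this comparison weighs the cost of the control against the annualized loss expectancy, ALE = SLE × ARO. The function name and all figures below are hypothetical.

```python
# Illustrative sketch of the cost-vs-risk comparison described above.
# All figures are made up for illustration.

def accept_risk(control_cost: float,
                single_loss_expectancy: float,
                annual_rate_of_occurrence: float) -> bool:
    """Accept the risk when mitigating costs more than the expected yearly loss."""
    # ALE = SLE * ARO (annualized loss expectancy)
    ale = single_loss_expectancy * annual_rate_of_occurrence
    return control_cost > ale

# Setting a BIOS password is nearly free, so even an unlikely compromise
# with a modest impact justifies it:
print(accept_risk(control_cost=50,
                  single_loss_expectancy=100_000,
                  annual_rate_of_occurrence=0.01))  # False: implement the control
```

The hard part, as the comments below discuss, is estimating the occurrence rate; the arithmetic itself is trivial.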

schroeder
  • But for a lot of usual best practices (password protecting the BIOS, encrypting the hard disk of a server...) the risk may be very low because the probability is very, very low, taking into account the other security measures already implemented. Maybe we could say that 80% of the risk is covered by 20% of the security measures... so why implement the other 80% of security measures that are just best practices difficult to justify? – Eloy Roldán Paredes Dec 11 '15 at 15:45
  • They are not difficult to justify. Why do you think they are difficult to justify? – schroeder Dec 11 '15 at 15:58
  • You assess the risks (taking probability and other mitigating measures into account) and make a decision based on cost/benefit. – schroeder Dec 11 '15 at 15:59
  • For example, a BIOS that is not password protected is a risk if the attacker has physical access to the device, knows how to enter the BIOS, has a prepared pendrive with a bootable OS, has another device to store what he or she is going to steal, has the time to perform all this against the device without anyone noticing, has to avoid all the other physical security measures, and all of this to compromise a thin client that has almost nothing stored, just network access to other more relevant systems that have their own security measures... – Eloy Roldán Paredes Dec 11 '15 at 16:06
  • Your response does not answer my question – schroeder Dec 11 '15 at 16:54
  • I think they are difficult to justify because the risk seems very low and the effort seems too high for such an "apparent" low risk. – Eloy Roldán Paredes Dec 11 '15 at 17:10
  • But, that is exactly what I said in my answer. – schroeder Dec 11 '15 at 17:17
  • The key is the word "apparent". "Apparent low risk" is apparent because it is difficult to measure scientifically whether a risk is high or low. Difficulty in measuring the risk *scientifically* = difficulty in justifying a best practice against the risk. – Eloy Roldán Paredes Dec 11 '15 at 17:23
  • Risk is a combination of probability and impact. Impact is easily scientifically measured. Probability is almost impossible to measure scientifically in infosec risk. It's based on experience and judgement. I'm still confused about what your question really is, though! Even if risk is difficult to measure, you can still assign a risk level and weigh the costs of a "best practice" against it. – schroeder Dec 11 '15 at 18:25
  • @Eloy: once you have "attacker has prepared a pendrive", the rest is redundant since someone who is prepared has covered all those things (including the time issue: if they're properly prepared they just stick the pendrive in, power-cycle the machine, select to boot from the drive and then leave it to do its thing). It sounds like you're trying to justify *not* doing this by treating an attacker as if they're relying on a whole lot of coincidences coming together ("oh look, I just so happen to have a pendrive with me!"). But of course they're prepared: they're an attacker, it's their job. – Steve Jessop Dec 11 '15 at 21:56
  • @Schroeder Agree with almost everything you say above. Except "impact is easily scientifically measured". Actually that's a big problem with why a lot of people underestimate cyber risk; they don't pay heed to very important risk elements that are hard to put a concrete number to. How much was the massive amount of bad PR and breach of trust with customers that came from the Target hack worth? How massive was the effect on the NSA and U.S. gov from the Snowden insider hack? But hard-to-exactly-quantify elements of risk are still vital to somehow factor into risk mitigation cost/benefit. – mostlyinformed Dec 12 '15 at 03:47
  • But the key point is, of course: the vagaries of measuring risk tend to lead people to frequently *under*estimate it, rather than overestimate it. And underestimating risk can be exceedingly dangerous, whereas overestimating it a little is simply inefficient. – mostlyinformed Dec 12 '15 at 03:53
  • @halfinformed The impact of a certain level of soft impacts, like reputational risk, can be quantified. What is impossible to quantify, which I already stated in my answer, is the likelihood of a certain level of reputational risk being experienced as a result of any one security incident. It seems as though you equated "impact" with "risk" in your comment. – schroeder Dec 12 '15 at 17:53
  • @Schroeder Well, in my perspective what differentiates a "risk" vs. an "impact" can be ambiguous, depending very much on the surrounding context. Sort of like determining whether a certain thing constitutes a "means" or an "end". You might say "Well, risk is a contingent event". But plenty of "impacts" involve uncertainty and contingency as well. For instance, if I say "If I don't watch my diet my weight might increase. If my weight increases I might have a heart attack. If I have a heart attack, I might die.", is my having a heart attack there a "risk" or an "impact"? Or both? – mostlyinformed Dec 13 '15 at 03:48
  • Anyway, I suppose I was using the word "risk" above in a colloquial sense covering both what might be called "risks" and "impacts" in an academic sense. What I was trying to get across was that people tend to underestimate or flat out ignore negative possibilities that are difficult or impossible to precisely quantify with a monetary value. Which, in turn, tends to lead to organizations systematically underestimating the resources it is wise to allocate to cybersecurity as risk (in the broad sense of the word) reduction. – mostlyinformed Dec 13 '15 at 04:05
  • Security Best Practices exist because the vast majority of people employed in security roles do not have the tools necessary to assess risk. We live in the information age, where the greatest asset and liability of a company tends to be the information it owns. – Aron Dec 14 '15 at 01:44
  • @Aron That's an interesting statement. Do you have any substance to back it up? – schroeder Dec 14 '15 at 04:17
  • @schroeder I am going by the old statistic that most businesses fail to recover from a data breach (and go out of business within x months). But further research seems to indicate that the original source of this oft-quoted statistic made it up on the spot... – Aron Dec 14 '15 at 04:33
  • @Aron assessing risk before an incident and failing to recover from an incident are 2 very, very different things. The OP is asking about assessing the risk of an activity before it even gets to the point of an incident. – schroeder Dec 14 '15 at 04:37

Best practices are not laws but recommendations. They usually come with an explanation of why they are recommended. If you feel that the explanation does not apply in your case, you are free to ignore the recommendation. You might even be right with this feeling, and thus you can save costs.

But be aware that, depending on your working environment, you cannot simply ignore the recommendation: you need to document why you feel that it can be ignored. It might also be that you are personally responsible if something bad happens because the best practices were ignored. Thus it is often easier and less risky to follow the recommendations than to ignore them.

Steffen Ullrich
  • I take your advice on not applying a best practice if it makes no sense to me, but I disagree on two things: 1) In my experience best practices usually come without the explanation of why they are important... it is just accepted. 2) I don't think that it is easier to follow recommendations than to ignore them... it is easier to ignore them, especially if you have to fight against other departments to implement some security measure that is not correctly justified. – Eloy Roldán Paredes Dec 11 '15 at 12:16
  • @EloyRoldánParedes: it highly depends on your working environment (for example industry vs. small company) and who is responsible if something goes bad. If you can shift the responsibility to somebody else it might be easier for you to ignore the recommendation. – Steffen Ullrich Dec 11 '15 at 12:21
  • @EloyRoldánParedes - Some industries *require* certain things (HIPAA for healthcare, PCI for credit cards, etc). Often, there's some wiggle room in the requirements, where you can choose not to do something, but only if you document why you aren't and how it doesn't pose a risk. – Bobson Dec 11 '15 at 13:28

There are a number of factors to be considered here:

  1. What's the cost of implementing the measure? Protecting all BIOSes can be a pain, but isn't a significant outlay of money or effort.

  2. What's the risk of a bad thing happening? The chances are fairly low, but the impact is moderate (keyloggers are mentioned, and there are a number of other issues with someone executing their own code on your system, namely that it is no longer your system). The actual risk is low-to-medium.

  3. What's the impact of implementation? Are there any legitimate reasons for the average user to be accessing the BIOS of their thin client? Not really.

So we've got something where mitigation doesn't cost much, doesn't have a major impact, and protects against a low-to-medium risk. I'd say that's a good argument to implement it, personally.

Jozef Woods
  • There is a fourth relevant factor: What is the *effectiveness* of implementation? If implementing the measure completely and perfectly eliminates the risk evaluated in point 2, then that's a very different situation to evaluate than if implementing the measure just shaves a small amount off either the severity or the probability of that risk, and the former can justify a much higher cost (point 1) and impact (point 3) than can the latter. I would put the given example of BIOS passwords somewhere in the middle of that scale, perhaps a bit closer to the latter than the former extreme. – Matthew Najmon Dec 12 '15 at 19:52

In many cases it boils down to a question about what very low risk means. If you have an exact knowledge of how low the risk is, you can do calculations on the expected cost of not mitigating the risk.

In reality you rarely know the exact risk. Often potential security problems are dismissed as very low risk without any attempt at an actual assessment of the risk.

In many cases it requires less effort to solve the security problem than to evaluate the risk with sufficient accuracy to make an informed decision on whether to address the security vulnerability. For me that is the primary argument for addressing even security vulnerabilities of very low risk, because it means you save most of the cost associated with evaluating the risk.

Who can always tell the difference between a security problem known to be low risk and one assumed to be low risk?

Sometimes it is more productive to evaluate security problems against each other rather than evaluating each individual security problem against a low risk threshold.

In your specific example you could compare the risk of malicious change of BIOS settings to the risk of network connection on the device being bugged with a MITM device.

kasperd
  • "In reality you rarely know the exact risk. Often potential security problems are dismissed as very low risk without any attempt at an actual assessment of the risk." Would give +1000 if I could. In fact, the whole concept of creating a safety margin when dealing with systems -- whether the security of a network against a local attacker, the resistance of a building to earthquakes, etc. -- is based on building in some degree of prudent protection against the risk that your assessments of risk are themselves off somewhat. – mostlyinformed Dec 12 '15 at 04:16

The posture you adopt is based on the environment you are operating in. At the very least, the security policies you create are proportional to the budget you have and the threat models you have in mind. Without an assessment of the dollar value you are protecting, you will spend an exorbitant amount of time figuring out where the holes are. On the other hand, criminal elements use poorly protected or unprotected systems to search for higher-value targets.

Also bear in mind that vendors purporting to help mitigate risk do not come with the silver bullet to solve every single problem. Ultimately you and the business are the only ones responsible for the security. From a business angle, you could pass on the costs of security errors/flaws, but it might not help in the long run. It's a pretty hoary world out there at the moment.

m2kin2

Sometimes, security best practices protect you against risks that are very improbable. In those scenarios, how do you defend the implementation of these security measures?

You shouldn't have to defend anything; let the numbers speak for themselves:

  1. Probability of attack X occurring = P.
  2. Total Cost of X if it occurs = TC. (This needs to include diagnosing and fixing the breach, lost revenue, reputation, lawsuits, crisis management, etc.)
  3. Cost to take security measures to prevent X in the first place = C.

The equation is simply:

C < P * TC

When the equation is true, the security measure should be implemented. Of course, the difficult part is calculating P and TC to begin with, but you may find you can be overly conservative on TC and overly optimistic on P and it will still be a worthwhile investment. Plus you also get the added benefit of publicizing the security measures to investors and/or customers.

This is obviously an oversimplification of all the factors to consider, but from a ROI point of view, the logic should hold up.
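The inequality above can be sketched in a few lines. This is my own hedged illustration; the figures echo the Ashley Madison numbers discussed in the comments (a company valued at $1B and a hypothetical breach probability of 1 in 1,000), not measured values.

```python
# Sketch of the C < P * TC decision rule from the answer above.
# All inputs are illustrative.

def worth_implementing(c: float, p: float, tc: float) -> bool:
    """True when the measure's cost C is below the expected loss P * TC."""
    return c < p * tc

# Expected loss: 0.001 * 1e9 = $1M, so any control cheaper than $1M pays off.
print(worth_implementing(c=500_000, p=0.001, tc=1_000_000_000))    # True
print(worth_implementing(c=2_000_000, p=0.001, tc=1_000_000_000))  # False
```

Note that the rule is symmetric in its uncertainty: underestimating TC or P biases the decision toward accepting risks that should have been mitigated.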

TTT
  • I'm sorry, but being able to calculate a scientific probability of an attack seems to me a fallacy. For example, looking at past events very often yields a probability of 0, and this is not realistic. Another possibility is "inventing" the probability or performing an "approximate calculation"... also not very scientific IMHO. I know that this is the standard in risk methodologies but I disagree with this approach. If we agree that we need a metric, I prefer the RAV calculation from OSSTMM. – Eloy Roldán Paredes Dec 11 '15 at 17:09
  • @EloyRoldánParedes - Just because we don't know a *good* way to calculate the probability doesn't mean it isn't possible to do so. And if the probability really approaches 0 then maybe you shouldn't worry about it unless it's comparatively easy to mitigate it. That's basically the point of the equation. For example, the owner of Ashley Madison once claimed the value of his company to be $1B. Arguably the cost of the breach could be the value of the entire company. If that breach had a P of 1 in 1,000 then the cost to add better security needed to be less than $1M - should have done it... – TTT Dec 11 '15 at 17:28
  • Another way of looking at it is this: If C / TC < P, you should implement it. C should be very well known, TC should be pretty well known (at least within [Fermi estimation](https://what-if.xkcd.com/84/) limits), and P is **a risk level you consider appropriate** for having to pay TC, be it measured in money, trust, lawsuits or apologetic emails. – l0b0 Dec 11 '15 at 23:49

So this is kind of an extension of schroeder's answer.

I think there's a slight mix-up here in what we mean by 'risk'. You're looking at risk from the perspective of the computer system (password-protect the BIOS, install AV, use a firewall, etc). The perspective that schroeder is talking about is the business's perspective: once the machine has been compromised, what is the associated cost?

When a business has a 'risk', there are a few things it can do:

  • Accept the risk - assume nothing bad will happen, and if it does it won't be a huge problem to deal with.

  • Risk avoidance - Avoid exposure to the risk at all.

  • Risk limitation (mitigation) - Reduce the likelihood of the risk happening.

  • Risk transference (CYA) - Make sure someone else is responsible for the risk.

To be clear, the two types of risk that are being discussed in the thread:

  1. The risk a business has should a particular system be compromised. This can include customer financial information (credit card numbers, etc.), as well as proprietary information, trade secrets, classified information, etc.

  2. The risk OP is proposing: the risk of a computer being compromised in a specific way.

Here's the nuance I'm getting at: #1 is the business' risk. The strategy to dealing with this risk is typically through risk limitation (mitigation). That's where #2 comes in. #2 mitigates the likelihood that #1 will occur. We consider many different methods in which a computer can be compromised in #2, and implement preventive measures in order to mitigate #1 as much as possible. Since a lot of 'best practices' are cheap or no-brainers (i.e. their cost is easily justified), it makes sense to follow them even if on an individual basis their attack vector is highly improbable.

Shaz
  • Sometimes the business risk is not very clear for devices that do not store business data or perform business-critical functions but are a first door into an infrastructure that is protected with other security measures. In this scenario I suppose that it is better not to implement the best practice, but I'm worried that not implementing a lot of non-important best practices will accumulate into a very high risk. – Eloy Roldán Paredes Dec 11 '15 at 17:20
  • @EloyRoldánParedes The problem there is that you essentially need to treat all of your systems equally. You can try to justify lower security practices on Jim's computer since he works on the factory floor and only uses it for email, but a hacker will see a computer with a lower security profile as a prime target. They will hack Jim's computer and use it as a launching point towards your servers, financial information, etc. – Shaz Dec 11 '15 at 17:24

In regards to your example:

All access to a system/network/forest/thing that contains secure data must be kept secure from end to end, no matter how small the end is.

In regards to security in general:

All access to a system/network/forest/thing that contains secure data must be kept secure.

Why?

There's a reason for this. A single access point to a network that contains secure data is itself in need of security. Information about what has happened on the system as a WHOLE must be secure to keep the secure data, all the way down to the smallest part, secure. If a single link in the chain breaks, the chain is broken.

But how do I break that chain if there is no security information on the client? Simple: by observing the network. If I can see what goes on in the network, I can learn what keys will get me through what locks, how, and in what ways. Then I have an easy way to get into your network and get your data. That is why, from end to end, ALL activity must be kept secure.


Let's go to the example of punching you in the face:

  1. You think your home is secure.
  2. You have biometric fingerprint scanners (like on TV), meaning only your fingers get you inside
  3. You touch a door knob on a store front 70 million miles away, six years ago
  4. I lift that print
  5. I identify your print (through some long drawn-out process)
  6. I use that print to open your home while you sleep
  7. I punch you in the face

Even though it was miles away, and that print was among many other prints, and so forth and onwards, I could still eventually find your print and use it to open your home. This is very improbable, but if I REALLY want to, I can still punch you in the face given time, effort, and some patience.


Still though this doesn't do more than present an example. Let's instead state the truth of security:

The most secure system in the world is buried underground, in an unknown location, burnt to a crisp, and destroyed.

Sure, that's a little tongue in cheek, but it gets the idea across. So yes, this security best practice should still be used. After all, your system isn't the most secure system in the world, so you should try to get close.

Robert Mennell

As has already been said, security is mainly a risk-vs-cost approach.

But IMHO, there is another point to consider: as in building security, a reinforced door is of little use if the window is left open. To correctly evaluate a risk you must know what you want to protect against. So you protect the BIOS with a password because it could be a vector for an attacker with physical access. IMHO, if an attacker can gain physical access to a computer, that computer is compromised, because anything could happen: a physical keylogger in the keyboard itself, a network analyser between the internal connector and the external plug, a malicious USB device inside the box, etc.

But a BIOS password is still good practice, because it can prevent further infection if an attack reaches the machine, and mainly because it prevents a careless or clumsy user from breaking or compromising their own terminal.

Serge Ballesta