108

An operating system has reached End of Support (EoS), so no more security patches will ever be released for it. An embedded device running this OS needs to be updated to a newer version. However, the engineers who designed the original product feel that the machine is not hackable and therefore does not need to be patched. The device has WiFi, Ethernet, USB ports, and an OS that has reached EoS.

The questions I am asked daily:

  1. We have application white-listing so why do we need to patch vulnerabilities?
  2. We have a firewall so why do we need to patch vulnerabilities?

And the comments I get:

Our plan is to harden the system even more. If we do this, then we should not have to update the OS and continue patching it. No one will be able to reach the vulnerabilities. Also, we will fix the vulnerabilities in the outward-facing parts of the OS (even though they have no ability to patch those vulnerabilities themselves) and then leave the non-outward-facing vulnerabilities unpatched.

I have explained Nessus credentialed scans in detail. I am not sure how to get my point across to these engineers. Any thoughts on how I can explain this?

UPDATE: The system is being patched. Thanks for everyone's responses and help.

Ken
  • 1,091
  • 2
  • 6
  • 5
  • 3
    Thank you. Our customers will not want a system with unpatched vulns. I believe a hardened system with vulns is not acceptable anymore. – Ken Dec 22 '17 at 09:47
  • 1
    So, is your concern the vulns or the reputational impact of customers *knowing* there were patches that could have been applied? Because those are 2 very different things. – schroeder Dec 22 '17 at 09:52
  • 1
    I can see that on a per-vuln basis. For instance, disabling SMB for WannaCry. However, their thought is that all future vulns can be mitigated by hardening. I can crash the system by fuzzing protocols. And WiFi KRACK could be difficult to mitigate without OS vendor support. – Ken Dec 22 '17 at 09:54
  • 1
    @schroeder My main concern is the reputational impact of customers knowing there were patches that could have been applied. Of course I care a lot about patching vulns as well. However, in this case reputation is the main concern. – Ken Dec 22 '17 at 09:57
  • ok, then you just introduced *another* opinion to challenge (hardening can counter all possible future vulns) and you just came up with a great challenge (KRACK) - I think you are hoping for a silver bullet argument using a scattergun approach, but they don't exist. The core assumption appears to be this latest one (all the others you mentioned stem from this), and that is *easily* challenged as you just did – schroeder Dec 22 '17 at 09:58
  • 1
    If reputation is the key concern, then getting deep into technical details is not the approach you need to take. You need to survey your customers and ask what *they* think. They are your risk subject, study them, not the OS. – schroeder Dec 22 '17 at 09:59
  • 193
    *" the engineers who designed the original product feel that the machine is not hackable"*. I do not know the engineer nor the machine, but I do know that he/she is **wrong**. – dr_ Dec 22 '17 at 10:42
  • 171
    "System is unhackable" This is where you laugh out loud. Anyone who thinks a system is unhackable does not understand how amazingly clever and resourceful attackers are. All systems are hackable given enough time and resources. The goal of security is to make the investment required a successful attack higher than the benefit of attacking. – jpmc26 Dec 22 '17 at 11:22
  • 13
    _Asking the customers is a great idea._ Just consider that if reputation is actually your main concern and your customers are even just remotely familiar with IT subjects, I would not mention to them that your engineers feel that the thing is unhackable; such a claim rightfully has the potential to hurt reputation greatly. – SantiBailors Dec 22 '17 at 14:06
  • 97
    Coming soon to a DailyWTF article near you... "But the system is unhackable! How could this have happened?! It must be KEN'S fault! That incompetent jerk broke the firewall whitelist with his patching!" – corsiKa Dec 22 '17 at 15:57
  • 12
    Despite how anyone feels about a system's security, it seems to me that you would want to try to patch every known vulnerability or at least have a very good reason why you couldn't. Because if you *do* end up getting hacked, "we didn't bother to apply any patches because we thought it was unhackable" is not a very good defense in any following investigation. – Herohtar Dec 22 '17 at 16:36
  • 1
    Is there a test system intentionally built for hack testing? If not, hacking any other system is an efficient way to become unemployed. Moreover (personal experience), proving someone else wrong can have the same effect if that person has enough influence. OP is in a tough situation. – WGroleau Dec 23 '17 at 13:16
  • 5
    Ask the so-called engineer if they're willing to bet $10,000 that it's not hackable. If they're still stupid enough to say yes, offer $5K (in the right places...) to anyone who can hack the device. You make five thousand and your point... – Pete Dec 24 '17 at 02:33
  • 3
    An option is to exploit the machine and report what you did, saying "if I exploited it, anyone could." That's what white-hat hackers do and it's the easiest way to shut these engineers up. If you don't want to do that, then document the fact that you told the engineers that they are wrong, so when the day comes and the system gets hacked you won't be the one to blame; you can just say you did your job. – Lynob Dec 24 '17 at 12:15
  • 1
    I believe that [this guy](https://www.schneier.com) said that anyone can create a system that he himself cannot hack. – dotancohen Dec 24 '17 at 12:24
  • 5
    The existence of vulnerabilities kinda confirms that the system is *not* unhackable. – IS4 Dec 25 '17 at 23:22
  • 1
    I was under the impression that EoS meant no plans for functionality improvements, perhaps no *plans* for fixes, and that security updates were in fact semi-common past EoS. OT: you should not be having this discussion with the engineers, rather with their and your boss. Ultimately, if your company still supports this device, the OS support becomes your responsibility, it doesn't matter if upstream support ceased. –  Dec 26 '17 at 07:35
  • 2
    This is a problem of communication skills, not technical skills. Listen to them and ask questions about what their argument is. Agree with them and let them know they have done a great job so far. Never use the word "but". Finally, have them sign a document stating that you have warned them, that they know what they are doing, and that you are not responsible for anything any longer. – Thomas Weller Dec 26 '17 at 17:21
  • @jpmc26 I think the goal should be to make the cost & impact of a successful attack with security measures lower than the cost & impact of an attack without security measures. A determined suicide attacker might go Steve-Jobs-thermonuclear on you at all costs. – Arc Dec 26 '17 at 17:24
  • 1
    @Archimedix The problem with your idea is that it places no upper bound on security spending. You cannot sink infinite time and money into security, and at some point, you must simply accept that if an attacker gets that far, you have simply lost. I considered saying, "make an attack cost prohibitive," but as you point out, there may be attackers with vast resources that you cannot hope to match. – jpmc26 Dec 26 '17 at 22:34
  • 1
    @jpmc26 the upper bound is stated as the sum of cost and impact (or better, risk, which is impact times probability of occurrence). If your security measures are insufficient, your overall cost and risk is too high to operate long-term. If they are too expensive, you gain nothing. It’s basically insurance maths, except that insurances add their own security measures and profit on top of that. – Arc Dec 27 '17 at 06:04
  • 2
    @IllidanS4 - Took the words out of my mouth: "Unhackable... vulnerabilities..." is an oxymoron. – Vector Dec 27 '17 at 07:45
  • 1
    Is it connected to the outside world at all? I guess there's always social engineering too :| – rogerdpack Dec 27 '17 at 16:56
  • 3
    Give up. If you're talking to someone - an _engineer_, no less - who legitimately believes that _any_ device is unhackable then you cannot approach them with an intelligent argument. – CGriffin Dec 27 '17 at 17:06
    The human element makes ANY unhackable system hackable. Is your reputation set up to handle the human element as well as the unexpected technological adversary? – KalleMP Dec 27 '17 at 19:46
  • Your "engineer" is mistaking "No evidence of a problem" for "Evidence that there is/can be no problem". – Jared Smith Dec 28 '17 at 02:28
    Make your concern heard and move on; don't waste your time convincing people (unless you're paid for it, that is). – eckes Dec 28 '17 at 06:15
    A whitelist is as strong as its weakest user. A firewall has to have open ports to let application data in/out. However, whether you NEED more security or not is impossible to infer here. – RandomUs1r Dec 28 '17 at 23:48
  • Vague question. What do you mean the device "needs to be updated". Why does it "need to be updated"? You need to give more specifics. – Tyler Durden Dec 29 '17 at 02:36
    Any major change requires a cost-benefit analysis. It seems the disagreement is on the result of the analysis, and since the analysis is in your heads, it's not even the same analysis. So write down the analysis first, agree that the necessary points are listed, then discuss it. – Peter Dec 29 '17 at 14:38
  • There is no such thing as an unhackable system. There are only systems that are sufficiently hard to hack that the 'loot' isn't worth the effort and/or resources. You ought to ask the question: is this system currently sufficiently hard to hack, or is it easy enough that the 'loot' is worth the effort? – Tijmen Dec 30 '17 at 11:50
  • Random question: The system didn't happen to be an old OpenVMS VAX or ALPHA server, was it (e.g., DefCon 9 conference in July 2001)? – honeste_vivere Mar 10 '21 at 20:54

15 Answers

182

The trouble with the situation (as you are reporting it) is that there are a lot of assumptions being made with a lot of opinions. You have your opinions and you want them to share your opinions, but they have their own opinions.

If you want to get everyone to agree to something, you need to find common ground. You need to challenge and confirm each assumption and find hard data to support your opinion or theirs. Once you have common ground, then you can all move forward together.

  1. You have whitelisting: great, what does that mean? Are there ways around it? Can a whitelisted application be corrupted? (A toy sketch of this failure mode follows the list.)
  2. What does the firewall do? How is it configured? Firewalls mean blocked ports, but they also mean allowed ports. Can those allowed ports be abused?
  3. No one has access? Who has access to the device? Are you trusting an insider or the ignorance of a user to keep it secure?
  4. What happens if someone gets local access to the device? How likely is that?
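
To make the first question concrete, here is a deliberately naive toy, not any real whitelisting product; the `ALLOWLIST` policy and the `run_if_whitelisted` helper are invented for illustration. It shows why a policy that decides *which* program may start says nothing about *what* that program then does with attacker-supplied input:

```python
# Toy illustration only -- not how any real whitelisting product works.
import subprocess
import sys

# Hypothetical policy: "the Python interpreter is a trusted application".
ALLOWLIST = {sys.executable}

def run_if_whitelisted(argv):
    """Start argv[0] only if it is on the allowlist."""
    if argv[0] not in ALLOWLIST:
        raise PermissionError(f"blocked: {argv[0]} is not whitelisted")
    return subprocess.run(argv, capture_output=True, text=True)

# The policy happily approves the trusted interpreter...
payload = "print('arbitrary attacker code ran inside a whitelisted process')"
result = run_if_whitelisted([sys.executable, "-c", payload])
print(result.stdout, end="")
```

Real products are more sophisticated (hashes, signatures, publisher rules), but the point of the question stands: a whitelisted application that accepts hostile input is itself attack surface, and only patching fixes its vulnerabilities.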

As an information security professional, your job is not to beat people over the head with "best practices" but to perform risk analyses and design a way forward that limits risk under the risk threshold in a cost-effective way. You have to justify not employing best practices, but if the justification is valid, then it's valid.

schroeder
  • 123,438
  • 55
  • 284
  • 319
  • 13
    I appreciate this. I think I am guilty of beating people over the head with "best practices". I was able to find evidence that the whitelisting can be hacked. There is access to the machine but not the OS desktop. Thanks again. – Ken Dec 22 '17 at 10:32
  • 10
    @Ken what you should read between the lines of the last paragraph is that best practices aren't always best. – Blrfl Dec 22 '17 at 13:16
  • 4
    Best practices are based on current knowledge. Nobody is omniscient, so current best practices do not stay that way forever. Once current best practices become obsolete, it is due to some fundamental flaw newly discovered. – Nelson Dec 22 '17 at 16:34
  • 12
    Best practice is not to rely on an enumerated list of known-today best practices, but to make your system foolproof by _not_ assuming it's unhackable, and designing it accordingly. – Lightness Races in Orbit Dec 22 '17 at 17:41
  • 11
    @Ken "There is access to the machine but not the OS desktop" - I open your machine, connect a disk with my own OS on it, boot it, and then make whatever changes I want to anything and everything on your system. Or I just open it, take your disk out, and sell it to the highest bidder. – aroth Dec 23 '17 at 13:02
    Let's add to the whitelisting point. Whitelisting blocks unauthorized executables from running. However, if an allowed executable has a vulnerability, whitelisting will do NOTHING to stop that. The exe, if it is whitelisted, will run - complete with vulnerability. – baldPrussian Dec 28 '17 at 02:26
67

If someone tells me that their machine is not hackable and I ought to believe them, I immediately conclude that

  • The machine is kept guarded under Fort Knox/High security prison conditions, with 24/7 guards and security cameras,

and also one of the following:

  • The machine has no exchange of information of any kind (no usb, ethernet, firewire, serial, parallel, etc. of any kind)

  • The machine is permanently turned off.

Martin Argerami
  • 863
  • 6
  • 6
  • 116
    24/7 guards? Well there you've just got a perfect attack vector! Never underestimate the power of the insider threat. – forest Dec 22 '17 at 13:00
  • 42
    The only unhackable system is inside a safe that's been welded shut and pushed off a boat into the Marianas Trench. Then the boat crew was all shot to keep its location secret. – Monica Apologists Get Out Dec 22 '17 at 14:06
  • 18
    Also, I always get irked at hearing things like "the most secure computer is an unplugged computer", because that completely violates the Availability principle of the CIA triad. An unplugged computer is the ultimately insecure computer: a complete denial of service. – forest Dec 22 '17 at 14:14
  • 4
    @Adonalsium That's not unhackable. The location can be brute-forced. The real answer is, "hackability" is not absolute. I typically consider a system without a network connection to be "secure", to the extent that someone would need physical access to compromise it. In high-security situations, air-gapping (including USB and other device ports) is usually considered the highest level of security. – Micheal Johnson Dec 22 '17 at 15:15
  • 16
    During our ongoing GSS audit our security people keep hitting us over the head with "gates, guns, and guards are not sufficient" to protect air-gapped media. btw, we're located inside a US army base. – doneal24 Dec 22 '17 at 16:12
  • 59
    @Adonalsium is pulling the punches. We all know the only unhackable system is one that's crossed the event horizon. – R.. GitHub STOP HELPING ICE Dec 22 '17 at 18:26
  • 1
    An addition for your list: "The system to be hacked does not even exist." – jpmc26 Dec 23 '17 at 03:38
  • 4
    @R.. oh certainly a system that has crossed the event horizon is hackable, you can send information that way. You just can't get a response on your side :) –  Dec 26 '17 at 07:30
  • 5
    Just use a brick. Nobody can hack that, and it's about as useful. – Byte11 Dec 26 '17 at 21:36
  • @DimaTisnek: Arguably any information "on the other side" simply doesn't exist. – R.. GitHub STOP HELPING ICE Dec 26 '17 at 23:49
  • 1
    @R.. ever heard of Hawking Radiation? The information will eventually be radiated. It's just a question of reconstructing it. ;) – Amani Kilumanga Dec 27 '17 at 01:41
  • Not the information, just mass/energy. – R.. GitHub STOP HELPING ICE Dec 27 '17 at 02:22
  • 3
    This does not appear to provide an answer to the question (or it's an unhelpful answer, if you're suggesting replying to them in a sarcastic manner that's only helpful to someone who already knows why these things should be true). You might want to [edit] and rephrase to more explicitly address what can go wrong with having the device in a less secure location and why having any kind of information exchange is a problem. – NotThatGuy Dec 27 '17 at 06:50
  • 2
    @NotThatGuy: We have found that trying to explain why these things are true leads only to complaints of why isn't x being fixed, where x is some fundamental law of computer security. – Joshua Dec 27 '17 at 16:44
  • 1
    @Joshua That doesn't change the fact that this isn't an answer (although a minor rephrasing can change that, but then it would be a fairly low quality answer IMO - it needs some meat added to it). – NotThatGuy Dec 27 '17 at 22:04
  • 1
    A running machine always "exchanges" some kind of information: at the very least, it consumes electrical power and generates heat. Attacks have been carried out in the past by analyzing power consumption and heat emissions, as well as timing issues. So the simple fact of a machine running already creates information that can be exploited. – Patrick Mevzek Dec 28 '17 at 18:23
  • @jpmc26 you can make that happen by just parting it out – Aaron Dec 28 '17 at 19:14
40

Because you want a multi-layered security strategy with defence in depth. You have a firewall, but what if there's a security vulnerability in your firewall? What if some application exploit gives user-level OS access, and then an unpatched OS vulnerability allows that to be escalated to root access? For proper security you need to patch all known vulnerabilities, not just the ones that you believe can be exploited on your system. A combination of an unknown vulnerability and a known vulnerability that you believe can't be exploited may allow a compromise where either on its own would not, and you can't patch against the unknown vulnerabilities.
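
One way to make this quantitative is a back-of-the-envelope sketch: treat each layer as having some probability of being breached in a year and, assuming the layers fail roughly independently (itself generous to the defender), multiply them. Every number below is invented purely for illustration:

```python
# Toy defence-in-depth model: the chance an attacker penetrates ALL layers
# is the product of the per-layer breach probabilities, assuming independence.
def p_full_compromise(layer_breach_probs):
    p = 1.0
    for q in layer_breach_probs:
        p *= q
    return p

# Hypothetical annual breach probabilities per layer.
patched   = [0.05, 0.05, 0.05]  # firewall, app whitelist, patched OS
unpatched = [0.05, 0.05, 0.90]  # same, but the OS has known unpatched holes

print(f"all layers patched: {p_full_compromise(patched):.6f}")    # 0.000125
print(f"OS left unpatched:  {p_full_compromise(unpatched):.6f}")  # 0.002250
```

In this toy, leaving the one inner layer unpatched makes a full compromise eighteen times more likely even though the two outer layers are unchanged, which is the "what if a layer fails" argument in numbers.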

Mike Scott
  • 10,118
  • 1
  • 27
  • 35
  • @KalleMP this is *one* answer and assumes the ability to patch. Risk assessments inherently are "value judgments" and if security was as easy as "just do all the right things" then the infosec profession would need only technicians to run around clicking buttons. Reality, as the OP's situation states, is far more complicated than that. – schroeder Dec 28 '17 at 08:53
  • I was looking for the phrase "defense in depth." That's the simplest answer to these people and usually the best one. – Wildcard Dec 29 '17 at 04:06
10

The reason is simple: security is applied in layers. For example, to connect to an important database, one needs first to get into the network of the database (pass the firewall), have one's IP address on the list of clients allowed to connect, and then initiate the connection with a username and password. Any one of the layers would seem to make the other two redundant. The problem is "what if". Think of the default scott/tiger login of old Oracle, or an employee inadvertently forwarding a port to the public internet. The firewall may be blocking only TCP while the server also listens on UDP, or IPv6 is misconfigured and security only applies to IPv4. This is why good security comes in layers: attempts are monitored, and security experts learn from attempted (hopefully failed) attacks or inspect activity on honeypots. Also, zero-day exploits (ones that apply even to the latest patch) are less likely to succeed in a layered environment, since the attacker will need an exploit for each layer.
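
The "firewall covers IPv4 but not IPv6" failure mode above is cheap to test for. A minimal sketch, assuming a dual-stack host; the hostname and port are placeholders, so point it only at a service you own:

```python
# Probe the same TCP port over both address families. If IPv6 answers
# while IPv4 is filtered, the firewall is only guarding one front door.
import socket

HOST, PORT, TIMEOUT = "db.example.internal", 1521, 3.0  # hypothetical listener

def reachable(family):
    try:
        infos = socket.getaddrinfo(HOST, PORT, family, socket.SOCK_STREAM)
        with socket.socket(family, socket.SOCK_STREAM) as s:
            s.settimeout(TIMEOUT)
            s.connect(infos[0][4])
        return True
    except OSError:
        return False

print("IPv4 reachable:", reachable(socket.AF_INET))
print("IPv6 reachable:", reachable(socket.AF_INET6))
```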

No device is unhackable; it just hasn't been hacked yet. Either there is little interest in your device and/or the payoff is very low. Zero-day exploits may still exist.

Also, some Android devices simply cannot be upgraded beyond a specific version. Knowing that an adversary has such a device is an open invitation for hacking, since the device name/brand carries the exact recipe of how to hack it.

Maintaining a device without active vendor support is also dangerous from a functional perspective.

Security is not only designed to protect from outsiders (the firewall) but also from insiders. I don't know the context your device is running in, but given what you write, it may be vulnerable to somebody already inside the firewall.

6

There are no unhackable systems. For those mentioning airgapping, there are plenty of examples of actual hacks or potential hacks on airgapped systems. Stuxnet is probably the most famous (and most extreme) example. Some others include van Eck phreaking, acoustic analysis, or other side channel attacks.

There are ways to mitigate vulnerabilities that don't involve patching. For instance, if the system is vulnerable to KRACK, is it possible to simply disable WiFi? If WiFi is permanently disabled, there should be no need to apply any update involving WiFi. Likewise, if there are specific applications on the system that pose a vulnerability (like Java, .NET, Flash, browsers, etc.) you could simply uninstall those applications. There's no need to update Java if it's not even installed.
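
If you go the mitigation route, it is worth verifying continuously that the mitigation is still in effect rather than trusting it once. A rough Linux-only sketch; the interface name and the sysfs path convention are assumptions to adapt for the actual device:

```python
# Rough Linux-only check that a "permanently disabled WiFi" mitigation
# still holds. Run from cron/monitoring so drift gets noticed, not assumed.
from pathlib import Path

WIFI_IFACE = "wlan0"  # hypothetical interface name

def wifi_is_disabled(iface=WIFI_IFACE):
    state_file = Path(f"/sys/class/net/{iface}/operstate")
    if not state_file.exists():
        return True  # interface absent entirely: driver/hardware removed
    return state_file.read_text().strip() == "down"

if wifi_is_disabled():
    print("mitigation holding: WiFi interface absent or down")
else:
    print("ALERT: WiFi is back up -- the KRACK mitigation is void")
```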

With OS upgrades this is admittedly more difficult. You need to be aware of the potential vulnerabilities, then you need to mitigate them. The benefit of using a supported OS is that someone else is (presumably) already doing the first part and half of the second part for you.

A fully updated/upgraded system is not a secure or unhackable system. But it does tend to minimize the risk of KNOWN vulnerabilities. To echo Schroeder, risk analysis is more important than either 'hardening/locking down' or blindly 'upgrading' and hoping that either will make you more secure.

Meridian
  • 61
  • 1
  • 5
    Stuxnet was a result of a violation of the airgap policy, and van Eck phreaking and other such attacks violate confidentiality, but not integrity. It would be a far cry to call them "hacking". As for there being "no unhackable systems", EAL7+ stuff comes pretty close! – forest Dec 22 '17 at 22:02
  • 3
    That's a nice distinction between confidentiality and integrity. OP didn't mention the goal of the security for the system and my own experience is more heavily focused on risk associated with confidentiality. – Meridian Dec 22 '17 at 22:25
    https://www.wired.com/2017/02/malware-sends-stolen-data-drone-just-pcs-blinking-led/ - again, one could argue this is a violation of the airgap policy - but still, a bit of fun I thought I'd share. –  Jan 24 '18 at 09:54
5

No system is truly "unhackable." However, once we have decided that a system is "unhackable" enough, we do not have to maintain a channel for security patches.

For a concrete example, our "unhackable" system controls a security camera. The camera's job is to look at a fixed location. Every setting is either constant or the system is smart enough to adjust by itself. The system streams video data and does not need any input from the user.

We could have the system run ssh so that we could log in periodically and apply security patches but that actually opens up a (very small) security hole. An attacker could use ssh to hack the camera. (Good luck hacking ssh).

So it is a trade off. If you honestly believe that you will never need to apply a security patch then you might decide that leaving that channel open is not worth it.

I got this idea from a presentation I attended where someone described the systems they were building for the government. The components of the system were short lived virtual machines (usually less than one day). Each virtual machine was immutable and disposable. The plan was that if they needed to apply a security patch they would just dispose of the machines in an orderly fashion and create new ones. The virtual machines did not have ssh.

The government security auditor blew a gasket and made them install ssh so that they could apply security patches. The ssh server did not provide any security value and was in fact a hole.

However, thinking about it, this example (and my camera) are just security updates through a non-traditional channel.

What about

  1. a camera deployed to Mars ... everyone knows about the camera and everyone can view the camera's data
  2. a camera that exists secretly behind enemy lines (if the enemy knew about the camera, they could easily take it ... do we want to maintain a channel for security updates?).
emory
  • 1,560
  • 11
  • 14
  • 2
    Even if you wish to apply security patches later on, a viable way around that would be to require physical access, combined with tamper protection. – Nzall Dec 22 '17 at 17:33
  • 7
    But the camera presumably has to upload its footage to a remote location, suppose an attacker spoofs its DNS to make it upload to the attacker’s server? And suppose there’s a buffer overflow in its network stack that the attacker can exploit with a malformed packet? Now it’s not unhackable after all. – Mike Scott Dec 22 '17 at 18:36
  • 5
    Also, the security camera accepts outside input. What if there's an exploitable bug in the image processing software that allows someone to hack your system via the camera? – Rob Watts Dec 22 '17 at 19:52
  • 3
    `Good luck hacking ssh` You've never been given a quote for an OpenSSH 0day before, have you? – forest Dec 22 '17 at 22:01
  • I think @Nzall's point is valid. In this example, we are still applying security updates - just changing the channel in which they are applied. – emory Dec 23 '17 at 01:25
    @MikeScott Maybe the camera is broadcasting to the world. The camera just landed on Mars and is taking pictures and broadcasting to the world. If we left open a channel for security updates, then an attacker could flood it with noise. The attacker does not get in to apply their update, but prevents us from applying our update and maybe wastes the camera's power resources. – emory Dec 23 '17 at 01:31
  • @forest are you implying that it is easy to hack ssh? – emory Dec 23 '17 at 01:32
  • 1
    @RobWatts I think you are saying that by presenting a carefully chosen image to the camera, a hacker can gain control of the system. This is certainly possible. I think we are just going to have to live with the fact that our systems are hackable. If you are really worried about that then you need to apply some physical security to the area around the camera to prevent people from presenting images to the camera, but that would probably defeat the purpose of the camera. – emory Dec 23 '17 at 01:36
  • @emory No, but it's far from unhackable. – forest Dec 23 '17 at 01:39
  • @emory that's right. Basically I was trying to emphasize the point that nothing is unhackable - many people would not even consider the possibility that a camera could be hacked. – Rob Watts Dec 23 '17 at 02:46
  • @RobWatts to be honest, I had not considered it and I still have no clue how to do it, but since you brought it to my attention I am sure that applying time and money to the problem would find a weakness in the camera – emory Dec 23 '17 at 03:20
  • 2
    What if the vulnerability is of a nature that allows an attacker to communicate with the machine after all? It is connected to the network since it's sending data to a destination. Now you have a device that's permanently vulnerable. If your answer is, "We'll replace all the devices," then you've actually specified a "physical patching" scheme as your answer. Your points about devices now out of your control make some sense, though. – jpmc26 Dec 23 '17 at 03:43
  • 1
    @jpmc26 I have thought about it some and tend to agree with you. Most of the time you do want to apply security patches. Some of the time you choose alternative channels (which may introduce lag). Almost never do you choose not to apply security patches. – emory Dec 23 '17 at 09:48
  • @RobWatts I think this is an example of what you are referring to https://globalnews.ca/news/3654164/altered-stop-signs-fool-self-driving_cars/. Even if the car's computer is otherwise hack-proof there is a way to crash it via "graffiti" – emory Dec 29 '17 at 16:48
  • I have a wooden spoon in the kitchen that is unhackable. At least I hope so. I might be wrong. – gnasher729 Feb 13 '19 at 21:54
  • 1
    @gnasher729 too late. I have been using it to mine cryptocurrency for years. thank you for not patching your spoon. – emory Feb 14 '19 at 13:37
4

The fact that they can't think (right now) of a way to hack it does not mean that it is "unhackable". That is why, as a principle, we apply all security patches, even ones for a component that shouldn't be accessible (e.g., why patch a privilege escalation vulnerability if an attacker wouldn't even have user access?).

Now, they may be right, and not patching could actually be the right decision in your case. But there are few people from whom I would accept that outright. And those engineers are probably not especially knowledgeable in performing security audits.

As an argument for convincing them, I would ask them to provide access to one of these devices to anyone interested, with a juicy bounty attached (e.g., they bet their house?).

If they are uncomfortable doing that, well, then they don't actually think it's unhackable. And if they think that doing so would reveal important information, that means they rely on security by obscurity. A truly unhackable system would remain unhackable even if the attacker knew everything about its workings.

PS: Even if they don't end up betting their houses, you would really benefit from implementing a bug bounty program.

Ángel
  • 17,578
  • 3
  • 25
  • 60
3

the engineers who designed the original product feel that the machine is not hackable

The engineers who designed the Titanic felt that it was unsinkable.

The problem in IT is that people see no need to update a system: why change a working system? These companies then make the headlines: "4 factories were closed due to the x outbreak" or "Company x has been breached, the personal details of y million customers exposed".

Imagine: IBM's cloud recently moved all customers forcefully to TLS 1.1 (YES, the already obsolete version) and some customers complained ... THOSE CUSTOMERS SHOULD BE PREPARING FOR TLS 1.3. I do not know what they are doing, and I do not care what their excuses are; they should be running TLS 1.2 EVERYWHERE! IBM backpedaled. UNACCEPTABLE!

Now you can tell me that the black unicorn in the stable is preventing you from moving everything to TLS 1.2; whatever, dispose of it and do not do business with the company selling the black unicorn ... We as an industry do not do this, and breaches make headlines; breaches will continue to make headlines until we learn the lesson.

thecarpy
  • 319
  • 1
  • 9
  • The problem is when the black unicorn in the stable is, for instance, the oldest client that brings in the most revenue. You can quit doing business with some vendor as long as they have a secure competitor, it is a completely different matter when it is a client. Also, Microsoft is stupid for [not allowing you to override the TLS protocol request-wise](https://stackoverflow.com/a/3795952/1739000) (or even site-wise), so essentially they are exacerbating the black unicorn problem. – NH. Dec 27 '17 at 21:24
    I am pretty sure your customer will appreciate being told that he is compromising his security, yours, and that of your other customers. The news headline story is a good argument as well. The customer is king, true, but security has to come first! – thecarpy Dec 29 '17 at 16:27
3

feel that the machine is not hackable

Feelings do not matter. Facts do.

Go back to your risk assessment and/or threat model. Check whether patching or keeping the software up to date was part of your risk treatment plan. Check whether outdated software was part of your risk analysis or threat model.

Go back to the engineers with these facts and discuss with them how the risk changes, or which threats are now untreated, given that the software is no longer maintained. Also consider that this particular risk will increase over time, as the chance of an exploitable defect being discovered will grow. So look ahead until the reasonable end of life of your product.

Note that their mitigating actions might well make the risk acceptable. But this needs to be discussed and the risk plan updated. It might also be that it makes the risk acceptable today, but in a few years not anymore. What then? Instead of looking for arguments against the engineers, get on the same page with them. Yours at least realize that mitigating actions might be needed.

Tom
  • 10,124
  • 18
  • 51
3

“System is unhackable so why patch vulnerabilities?” In your question, you're trying to argue against a fallacy and an unprovable argument ("How do you know that it's 'unhackable'? Or do you just think that since you can't hack it, no one else can?"). In the end, however, I think it's going to come down to a discussion of risk acceptability and who is willing to accept that risk. Try explaining it to them this way:

"We have application white-listing so why do we need to patch vulnerabilities?"

Application whitelisting is only as good as the whitelist itself and the tools that block apps not on that whitelist, and it assumes there are no faults or vulnerabilities in the whitelisting tool itself. It also only protects against unknown / untrusted applications. What if the attacker decides to "live off the land" and use the system's own tools against itself? What if one of the applications you've whitelisted as part of the OS has a vulnerability?

"We have a firewall so why do we need to patch vulnerabilities?" This is, effectively, the same argument as the previous one. Are you certain, absolutely, positively, 100%, beyond a smidgeon of doubt certain that there are no vulnerabilities in the network stack and / or the firewall itself nor any of the applications or services that may be listening or accessible via that network stack?

If their answers to the above are that they are 100% positive about their choices and decisions, then I would write up a document detailing their acceptance of that risk and have it signed off on by their leadership team all the way up to the CIO. Ultimately it's they (the CxO level) who are on the hook for the issue if and when the system gets breached, and they're the ones who could be called before Congress (or whatever governmental oversight body they're subject to), as the executives at Equifax were. When it's explained to the executives that they aren't doing everything in their power to keep a system updated and patched (as is required by many different credentialing and oversight groups / laws) and that they (the CxO) could be held accountable, attitudes often shift.

Matt E
  • 31
  • 4
1

Seems simple to me. Getting back to the question of how to argue against not patching a system thought to be unhackable: what is the worst-case scenario if that system is breached? Assume all of the protections in place fail or are likewise breached. Don't bias this exercise by excluding consequences because you don't think it can or will be breached.

Now, put that worst-case scenario into business-impact terms: cost in the form of lost revenue, legal/regulatory fines, or damage to the company's image in the industry.
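
One standard way to do that arithmetic is annualized loss expectancy: ALE = single-loss expectancy × annual rate of occurrence. Every figure below is invented for illustration; substitute your own estimates:

```python
# Worked Annualized Loss Expectancy example (ALE = SLE * ARO).
# All numbers are hypothetical placeholders.
def ale(single_loss_expectancy, annual_rate_of_occurrence):
    return single_loss_expectancy * annual_rate_of_occurrence

breach_cost    = 2_000_000  # fines + lost revenue + reputational damage
rate_unpatched = 0.20       # guess: 1-in-5 chance per year on an EoS OS
rate_patched   = 0.02       # guess: 1-in-50 chance per year if patched

risk_unpatched = ale(breach_cost, rate_unpatched)  # $400,000 / year
risk_patched   = ale(breach_cost, rate_patched)    # $40,000  / year
print(f"expected annual loss, unpatched: ${risk_unpatched:,.0f}")
print(f"expected annual loss, patched:   ${risk_patched:,.0f}")
print(f"patching is worth up to ${risk_unpatched - risk_patched:,.0f}/year")
```

If the migration costs less per year than that difference, the unemotional numbers, not feelings about hackability, settle the argument.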

If that impact is severe, then look your engineers in the eye and say "are you willing to put your job -- and possibly your entire career -- on the line that this will never happen? Because if it does, in the aftermath of explaining how it happened, the conscious decision to continue using an EOL operating system and deeming patching unnecessary will be near, if not at, the top of the list."

On the other hand, if the business impact isn't that severe, it could make sense to continue using an EOL OS. But how best to do that in a well risk-managed way is another entire topic.

Thomas Carlisle
  • 809
  • 5
  • 9
1

This may not be a technical decision at all. Using any externally-sourced component generally means you have to use that component strictly in accordance with its manufacturer's guidelines, or risk being stuck with all the consequences and liabilities arising from any failure it might be implicated in.

So if the device misbehaves, and someone is injured (or some other liability is incurred), then the original OS maker will say "unsupported software - not our problem". And your company's insurer will say "using out-of-support, antiquated software - that's negligent, and so not our problem".

So, from your personal perspective, make sure those making the affirmative decision to continue to use outdated, unsupported components:

  • have been shown that they are doing so (and you have that in writing)
  • have affirmatively made the case that the upgrade is unnecessary (and they've made that in writing)

There's a big gap between people saying "we don't need to do this upgrade" and "I personally accept responsibility for not doing this upgrade".

In practice, there are often upgrades to components that are mandated by their having gone EOL, even if there is no actual technical need to do so. That's a necessary part of engineering a complex product.

1

If your device has a wi-fi connection, then it can be attacked through the network. Will that attack succeed? It's a matter of the benefits of attacking the device, versus the level of effort required. Basing it on an outdated and unsupported OS definitely simplifies the attack method.

Application whitelisting is no protection, just a minor roadblock. You think a hacker can't develop an app that masquerades as one on the app whitelist? Of course they can... something they might look into if their first attempt doesn't run.

Equifax had quite a firewall in place. Didn't stop the hackers from exploiting the Struts hole that Equifax IT managers failed to patch, through a port that was left open out of necessity. A firewall just stops some of the older, obvious attacks.

Think back to the Target hack - the CEO and CIO lost their jobs over that one, and it was perpetrated by an insider, aided by Target's use of an older Windows version that was no longer being updated, plus older, non-secure connectivity methods on their point-of-sale devices. Doubtless, the CIO concluded that updating the Windows version on their POS devices was too expensive, a judgment that was proven very wrong.

Think embedded firmware is immune to hacking? Consider the HP printer hack. HP had the clever idea of updating its printer firmware through a print job - easy to initiate. Until... someone came up with a firmware version that turned the printer into a spam relayer, and delivered it via a malware print job.

How do you do firmware updates? Through wi-fi? Yes, a hacker can replicate that... if they have a good enough reason.

A networked device can be hacked into becoming part of a botnet... a common way to launch a DoS attack. A hacker could find the vulnerability and, knowing that it would damage the company's reputation, launch the attack at the same time they're shorting your company's stock. That has happened... Stealing PII and CC info isn't the only way to profit from a hack.

Now, ask yourself - what is the risk to you personally? If your system were to be hacked, can you demonstrate to the executives of your company that you exercised due diligence in identifying and mitigating potential threats, especially since you are basing the system on an OS that is no longer being updated? Hint: taking the word of engineers that say the system is 'unhackable' probably doesn't qualify as due diligence.

For that matter, if your engineers say it's unhackable, they probably aren't even looking for potential vulnerabilities, let alone mitigating them.

Anyone who says a system is unhackable just isn't being realistic. Not in this day and age.

tj1000
  • 131
  • 1
0

Depending on the resources available to you, the "foolproof" way (with all due respect to your colleagues) would be to prove to them that the system is hackable. Hire somebody who can, and let him or her demonstrate the system's weaknesses. My guess is that with WLAN it should not be terribly difficult. WLAN and a firewall? That's a contradictio in adjecto.

Afterthought: Perhaps it's possible to agree on payment on success (my dictionary calls that a "contingency fee"); that way the (hacking) service would always be worth the money.

-2

Each and every day we have headlines saying some system has been hacked. It is not because those systems were out of date or left unguarded by machine guns, but because someone invested time in hacking them.

Most importantly, the attacks that are pulled off well rely not on raw brainpower but on simple social engineering. So we are told to keep the system up to date, because if we do somehow fall into that pit, the information the attacker gains doesn't help them.

schroeder
  • 123,438
  • 55
  • 284
  • 319
  • 1
    This does not answer the question. The engineers are mitigating the problems. If they are, why update the OS to a later version? – schroeder Dec 22 '17 at 19:52
    @schroeder As I mentioned earlier, we patch to protect the hardware from both insider and outsider intrusions. As the question notes, outward-facing patches protect against outsiders, since they don't know what has already been patched. But an admin knows what has been done to secure the system, and if he wants to screw the employer it is easy for him to do so; that's the reason third-party security checks are made, to avoid such disasters. – Sampath Madala Dec 22 '17 at 20:44
  • 1
    It's impossible to mitigate a completely unknown risk. – barbecue Dec 24 '17 at 23:53
  • Upvoted for mentioning social engineering attacks. Vulnerabilities can be to social attacks as well as automated ones. – barbecue Dec 25 '17 at 00:31