29

I'm working on a thesis about the security hacker community.

When a 0day is published, how can an administrator secure his application/website between the time the 0day is published and the patch is developed?

Moreover, most of the time, this same 0day is used for months by blackhats, so are the blackhats ahead of whitehats?

K.Fanedoul
  • As a classification of their activities, yes. WHs locate vulnerabilities and often recommend solutions, but they are not typically the ones expected to deploy the solutions. They are external people given permission to test. A web admin is not classed as a "whitehat". – schroeder Dec 13 '18 at 09:48
  • I understand that they can be external, but in this case, who secures the application after the testers have done their job? The admin? – K.Fanedoul Dec 13 '18 at 09:54
  • Yes, the admin. – schroeder Dec 13 '18 at 09:58
  • Your latest edit changes the question sufficiently that the existing answer no longer applies. – forest Dec 13 '18 at 10:04
  • There's always one way to prevent every security issue immediately: shutting down the system. – Fabian Röling Dec 13 '18 at 10:12
  • @FabianRöling That's a common misconception. Security involves the CIA triad: Confidentiality, Integrity, and Availability. A violation of any one of those is considered a security problem. Shutting down a system completely eliminates availability. It effectively becomes a DoS born from the fear of bugs. – forest Dec 13 '18 at 10:59
  • That's true. Shutting down is only viable if it's e.g. a bug leaking user logins in a banking system. Then shutting down for a day is still a giant loss, but less than not shutting down. – Fabian Röling Dec 13 '18 at 13:12
  • @FabianRöling I think forest's point is if you need to pull the plug on your system, you haven't prevented a security issue. Even though it could've been worse, you still got hacked. – Lord Farquaad Dec 13 '18 at 22:29
  • If it's been published, it isn't exactly a 0day, is it? – Acccumulation Dec 14 '18 at 22:07
  • @K.Fanedoul Be careful in your definitions; [hacker community](http://www.catb.org/jargon/html/H/hacker.html) is a claimed term. I recommend being clear with "security hacker community". – wizzwizz4 Dec 16 '18 at 09:54

9 Answers

44

The person who discovers a security issue often reports it to the software vendor or developer first. This gives the software vendor time to fix the issue before publication. Then, after it is fixed, the bug is publicly disclosed. This process is called responsible disclosure.

Sometimes, the finder doesn't disclose the zero-day to the software vendor but instead uses it to hack other systems. Doing this can tip off security companies and lead to the bug being disclosed, burning the zero-day.

I don't think your statement "most of the time, this same 0day is used for months by black hats" is true. This is true for some security issues, but a lot of zero-day bugs are found for the first time by white-hat hackers. I wouldn't say black hat hackers are ahead of white hat hackers. They both find security issues and some of these overlap. However, the offense has it easier than the defense in that the offense only needs to find one bug, and the defense needs to fix all the bugs.

Ben
Sjoerd
  • Thanks for the answer. I said "most of the time, this same 0day is used for months by black hats" because I have read a lot of interviews with black hats saying that they use those 0days well before any publication – K.Fanedoul Dec 13 '18 at 08:57
  • @pjc50 It is absolutely true that blackhats use 0days months (or years) before they are patched. – forest Dec 13 '18 at 10:06
  • Apparently zero-days _can_ have very long lifetimes; the paper referred to in this presentation claims zero-days have an average life expectancy of almost 7 years: https://www.youtube.com/watch?v=8BMULyCiSK4 – You Dec 13 '18 at 10:16
  • @You: I'd take that number with a huge grain of salt. Just like pretty much any software bug, most security issues that would otherwise have qualified as 0days are fixed hours or days after the bug is introduced into a released version of the software, usually without much fanfare, but these never make the news (or security trackers) because they don't affect many people. The 0days that tend to make the news are those that live the longest, so there's a massive selection bias. – Lie Ryan Dec 13 '18 at 10:57
  • @LieRyan: Absolutely, and they say as much in the presentation (because the sample size is pretty small, definitions may vary, etc.) -- but it's still more data than the purely speculative claim in the answer. – You Dec 13 '18 at 11:07
  • @You: IMO, it's a meaningless and misleading number. It's based on a convenience sample of whatever bugs make the biggest news, and we'd expect the age of bugs to go down rapidly as you increase the sample size. That doesn't sound like meaningful statistics, as the number doesn't converge. You can get almost any number you want by picking where to stop adding to the sample. – Lie Ryan Dec 13 '18 at 11:15
  • @You 7 years until a patch is released, or until most systems are patched? Presumably the latter. I have a hard time believing that black hats can keep a working 0-day exploit secret from the rest of the world for 7 years, especially since many white/gray hats would traverse the same channels black hats do. – Niklas Holm Dec 13 '18 at 12:24
  • @NiklasHolm That's time from initial discovery to detection by an outside party (~31 minutes into the talk). – You Dec 13 '18 at 12:32
  • This is mostly a comment. It does not answer the question. – Tyler Durden Dec 17 '18 at 07:51
34

When a 0day is published, how can an administrator secure his application/website between the time the 0day is published and the patch is developed?

They use temporary workarounds until a patch rolls out.

When news of a 0day comes out, there are often various workarounds that are published which break the exploit by eliminating some prerequisite for abusing the vulnerability. There are many possibilities:

  • Changing a configuration setting can disable vulnerable functionality.

  • Turning off vulnerable services, when practical, can prevent exploitation.

  • Enabling non-default security measures may break the exploit.

Every bug is different, and every mitigation is different. An administrator with a good understanding of security can figure out workarounds on their own if sufficient details about the vulnerability are released. Most administrators, however, will look to security advisories published by the software vendor.

Sometimes, an administrator doesn't have to do anything. This can be the case if the vulnerability only affects a non-default configuration, or a configuration which is not set on their systems. For example, a vulnerability in the DRM video subsystem for Linux need not worry a sysadmin with a LAMP stack, since their servers will not be using DRM anyway. A vulnerability in Apache, on the other hand, might be something they should worry about. A good sysadmin knows what is and isn't a risk factor.
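As an entirely hypothetical illustration of the first two kinds of workaround above, mitigating a flaw in an Apache setup might amount to a couple of configuration lines; the module and endpoint names here are invented for the example, not taken from any real advisory:

```apache
# Hypothetical workaround until a patch ships:
# 1. Stop loading the (invented) vulnerable module.
# LoadModule example_module modules/mod_example.so   <-- commented out

# 2. Deny all access to the (invented) endpoint the exploit needs.
<Location "/vulnerable-endpoint">
    Require all denied
</Location>
```

The point is not these particular directives but the shape of the response: remove the prerequisite the exploit depends on, accept the temporary loss of functionality, and revert once the patched version is deployed.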

Moreover, most of the time, this same 0day is used for months by blackhats, so are the blackhats ahead of whitehats?

Whitehats are more sophisticated, but blackhats are more efficient.

Whether or not blackhats are ahead of whitehats is a very subjective question. Blackhats will use whatever works. This means their exploits are effective, but dirty and, at times, unsophisticated. For example, while it is possible to discover the ASLR layout of a browser via side-channel attacks, this isn't really used in the wild since ubiquitous unsophisticated ASLR bypasses already exist. Whitehats on the other hand need to think up fixes and actually get the software vendor to take the report seriously. This does not impact blackhats to nearly the same extent, as they can often start benefiting from their discovery the moment they make it. They don't need to wait for a third party.

From my own experience, blackhats often have a significant edge. This is primarily because the current culture among whitehats is to hunt and squash individual bugs. Less emphasis is put on squashing entire classes of bugs, and when it is, what gets created are sub-par and over-hyped mitigations (like KASLR). This means blackhats can pump out 0days faster than they can be patched, since so little attention is given to the attack surface and the exploitation vectors that keep being used and re-used.

forest
  • Another important difference is that the whitehats often have to convince the software vendor to fix the issue and find a fix/mitigation technique. The blackhats don't have to care about that. – Lie Ryan Dec 13 '18 at 10:45
  • @LieRyan Great point! That is very true. – forest Dec 13 '18 at 10:45
  • If I may add the most effective temporary workaround: turn the servers off. I find it useful to remember that that is a workaround which makes the server (almost) perfectly secure because that immediately leads to the discussion of security vs usability, which is a very important discussion when applying more reasonable workarounds (like the ones you listed). If the balance of security and usability is intuitive, it's kind of pointless to bring this silly workaround up, but if it isn't intuitive for someone, it may provoke thought. – Cort Ammon Dec 13 '18 at 17:05
10

When a zero-day is released or published, it comes with more than just a fancy name and icon. There are details about how the zero-day is used to exploit the system. Those details form the basis of the defender's response, including how the patch needs to be designed.

For example, with WannaCry/EternalBlue, the vulnerability was found by the NSA and they kept the knowledge to themselves (the same happens in the criminal community where vulnerabilities can be traded on the black market). The details were leaked, which informed Microsoft how to create the patch and it also informed administrators how to defend against it: disable SMBv1 or at least block the SMB port from the Internet.
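To make the "block the SMB port from the Internet" workaround concrete, it could look something like the following nftables ruleset on a Linux perimeter firewall. This is a sketch, not a vetted ruleset, and the interface name `wan0` is an assumption:

```
table inet filter {
    chain input {
        type filter hook input priority 0; policy accept;
        # Drop inbound SMB (445/tcp) arriving on the Internet-facing interface
        iifname "wan0" tcp dport 445 drop
    }
}
```

Disabling SMBv1 on the affected Windows hosts themselves is the stronger fix; the firewall rule only narrows exposure while hosts remain unpatched.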

That's how admins protect themselves. Patching is only one part of "vulnerability management". There are many things that an admin can do to manage vulnerabilities even if they cannot or do not want to patch.

In the WannaCry case, the NHS did not patch, but they also did not employ the other defenses that would have protected themselves.

One large part of my job is designing vulnerability mitigations for systems that cannot be patched for various business reasons. Patching is the better solution in general, but sometimes it just isn't possible at the time.

... are the blackhats ahead of whitehats?

That poses an interesting problem. If a blackhat finds a problem and only shares it with other blackhats (or other members of the intelligence community), does that mean that blackhats, in aggregate, are ahead of whitehats? Yes. Once a zero-day is exposed, it loses its power (that's the whole point of disclosing it). So keeping it secret gives it power.

Are blackhats more skilled, or do they use better techniques than whitehats? No. But shared secrets give blackhats more power, in aggregate.

schroeder
  • I disagree that shared secrets give blackhats more power in aggregate. While there is trading of information in the underground, it's highly localized. I believe it's the culture that prioritizes bug hunting (as opposed to mitigation research) which gives an edge to blackhats. You may fix one bug that I used ROP to exploit, but your lack of effective and ubiquitous CFI means I'll find another in no time. – forest Dec 13 '18 at 11:01
  • The fact that the utility of a zero-day is largely tied to how long it remains a secret is part of the main argument in favor of the Full Disclosure policy, vs Responsible/Coordinated Disclosure: https://en.wikipedia.org/wiki/Full_disclosure_(computer_security) Full Disclosure to everyone effectively burns the zero-day immediately – Chris Fernandez Dec 13 '18 at 19:10
  • @ChrisFernandez Full disclosure is good when the software vendor doesn't do timely updates and doesn't listen to security researchers. In that case, full disclosure empowers users to defend themselves with workarounds. When the vendor is responsive and actually cares about security, then responsible disclosure may be better, since they won't sit on the bug for ages. – forest Dec 14 '18 at 03:17
  • Full disclosure will kill unresponsive and incompetent vendors. If it had been in use for a long time, full disclosure would have eliminated most vendors who think they can do quality control after bringing software to market. This is the way bad actors are eliminated in any other industrial sector. – dan Dec 17 '18 at 15:26
6

When a 0day is published, how can a whitehat secure his application/website between the time the 0day is published and the patch is developed?

Sometimes there are workarounds which fix or mitigate the problem.

  • Sometimes you can disable some feature or change some setting in the software which causes the exploit to not work anymore. For example, infection with the Morris Worm from 1988 could be prevented by creating a directory /usr/tmp/sh. This confused the worm and prevented it from working.
  • Sometimes the exploit requires some kind of user interaction. In that case you can warn the users to not do that. ("Do not open emails with the subject line ILOVEYOU"). But because humans are humans, this is usually not a very reliable workaround.
  • Sometimes the attack is easy to identify on the network, so you can block it with some more or less complicated firewall rule. The Conficker virus, for example, was targeting a vulnerability in the Windows Remote Procedure Call service. There is usually no reason for this feature to be accessible from outside the local network at all, so it was possible to protect a whole network by simply blocking outside requests to port 445 TCP.
  • Sometimes it is viable to replace the vulnerable software with an alternative. For example, our organization installs two different web browsers on all Windows clients. When one of them has a known vulnerability, the admins can deactivate it via group policy and tell the users to use the other one until the patch is released.
  • As a last resort, you can simply pull the plug on the vulnerable systems. Whether the systems being unavailable causes more or less damage than their being online and open to exploits is a business decision you have to evaluate in each individual case.

But sometimes none of these is a viable option. In that case you can only hope that there will be a patch soon.

Moreover, most of the time, this same 0day is used for months by blackhats, so are the blackhats ahead of whitehats?

It happens quite frequently that developers / whitehats discover a possible security vulnerability in their software and patch it before it gets exploited. The first step of responsible disclosure is to inform the developers so they can fix the vulnerability before you publish it.

But you usually don't hear about that in the media. When point 59 of the patch notes for SomeTool 1.3.9.53 reads "fixed possible buffer overflow when processing malformed foobar files" that's usually not particularly newsworthy.

Philipp
  • I believe your Morris worm example is a poor one. The Morris worm used several vulnerabilities for jumping between and infecting systems, of which the fingerd flaw was one. (There was also at least sendmail's debug mode, and common user account passwords.) If I recall correctly, the real trick to defuse that one was to `mkdir /tmp/sh`. – user Dec 13 '18 at 10:57
  • Good point about turning the machine off being a reasonable business decision sometimes. – trognanders Dec 16 '18 at 08:45
3

Another key defense is monitoring and knowing your system.

Know where your valuable secrets are and who has access to them.

If someone tries to connect to your mail server on port 80, that's a red flag.

Why is the mail server all of a sudden sending traffic to an unusual IP?

Why does the mail server now have 10x the traffic?

Monitor people connecting to your external IP addresses. Drop and/or block all external ports and protocols that are not in use.

No legitimate user is going to connect to your web server on anything but ports 80 or 443, unless you have added additional services. You might consider blocking offending IPs for some time. Sometimes IPs are part of dynamic pools and you can't always solve the problem with a blacklist; in that case, just drop the packets.

If your business only operates in one country, maybe you should just block all other countries.

You can use whois to find the owner of an IP address range and, if administrative contact information is present, notify the owner so they can track the problem down on their end. (It's worth a try.)

You should get notified when any system is contacted by another system in an unexpected way. At first you may have a ton of notifications, but if the computers are on your network, you can investigate both sides and then either eliminate the traffic or whitelist it as expected.

These monitoring tools will also notify you about port scans; unless you have an authorized security team, no one should be port scanning.

Watch for regular events, and if they stop mysteriously, ask why.

Check the machine for infections. If services are deliberately disabled, you should be notified in advance so the changes are expected rather than mysterious.

Block as much as possible and monitor the rest.
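A minimal sketch of the "block as much as possible and monitor the rest" idea: a hypothetical script that checks observed connections against a per-host allowlist of expected ports and flags everything else for investigation. The hostnames, ports, and connection records are invented for illustration; a real deployment would feed this from firewall or flow logs.

```python
# Flag connections that fall outside each server's expected ports.
# The allowlist and the connection records below are hypothetical examples.
ALLOWED_PORTS = {
    "web01": {80, 443},         # web server: HTTP/HTTPS only
    "mail01": {25, 587, 993},   # mail server: SMTP/submission/IMAPS
}

def flag_unexpected(connections):
    """Return (host, port, src) tuples that hit an unexpected port.

    Hosts not in the allowlist at all are treated as fully unexpected.
    """
    alerts = []
    for host, port, src in connections:
        if port not in ALLOWED_PORTS.get(host, set()):
            alerts.append((host, port, src))
    return alerts

observed = [
    ("web01", 443, "203.0.113.7"),   # normal HTTPS traffic
    ("mail01", 80, "198.51.100.9"),  # mail server on port 80: red flag
    ("web01", 22, "192.0.2.44"),     # SSH to the web server: investigate
]

for host, port, src in flag_unexpected(observed):
    print(f"ALERT: unexpected connection to {host}:{port} from {src}")
```

After the initial flood of alerts, each flagged flow either gets eliminated or added to the allowlist, exactly as described above.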

Once you detect an attack, you need to do something about it.

Sometimes turning the system off temporarily is the only option. Maybe you need to block the attacker's IP address for a while.

You still have to protect and monitor all your legitimate services.

In addition to monitoring the community for vulnerability announcements, you should have penetration testers find the bugs before the hackers do. Then you have a chance to mitigate the attack on your terms: notify the maintainer of the affected system so they can patch it, or, if it's open source, have someone patch it for you.

Intrusion detection systems such as Snort can also examine and potentially block incoming attacks by detecting suspicious patterns.

Depending on the severity of the problem, you may have to find an alternate product to replace the vulnerable one.

As always, keeping your software up to date helps protect you.

This way you can block suspicious activity until you determine it's legitimate.

cybernard
2

Most potential exploits require a chain of vulnerabilities in order to be executed. By reading about the as-yet unpatched zero-day, you can still identify other vulnerabilities or preconditions that the zero-day would require.

To defend against threat of (say) an RDP attack from outside the network (zero-day RDP authentication failure published), do not allow RDP from off-site. If you don't really need RDP from outside, then this is a chance to correct an oversight. Or, if you must have RDP from off-site, perhaps you can identify a whitelist of IPs from which to allow these connections, and narrow the aperture in the firewall.

Likewise, to defend against an inside (and to some extent outside) RDP threat, limit the ability of A) users to execute RDP, B) machines to execute RDP, C) the network to pass RDP, D) machines to accept RDP, E) users to allow RDP. Which VLANs should have the ability to generate outbound RDP? Which machines should be able to do this? And so forth.

Every one of these steps, in both the outsider and insider scenarios, works to harden your network against an RDP authentication exploit even without a patch.
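As a sketch of "narrow the aperture in the firewall" for the outsider scenario, the whitelist idea could look like this on a Linux perimeter firewall. The management subnet 203.0.113.0/24 is invented for the example, and real deployments (Windows Firewall, network ACLs, etc.) would express the same policy differently:

```
table inet filter {
    chain input {
        type filter hook input priority 0; policy accept;
        # Allow RDP (3389/tcp) only from the approved management subnet
        ip saddr 203.0.113.0/24 tcp dport 3389 accept
        # Drop RDP from everywhere else
        tcp dport 3389 drop
    }
}
```

An attacker with a working RDP zero-day but no foothold in the whitelisted subnet now has one more link to forge in the chain.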

A defense-in-depth mentality lets you break the chain of vulnerabilities and preconditions that even an unpatched zero-day requires. Sometimes.

I have intentionally chosen a fairly easy problem here just to illustrate the point.

Source -- I have done this before.

2

Relatively few hacks allow the attacker to break into a system. Most are "privilege escalation" bugs that allow an attacker to have greater control over the system after they have access to it. There are so many ways to achieve administrative control of a machine once a hacker has access to it, that it is more or less a waste of time to try to secure a machine against privilege escalation. Your best policy is to focus on preventing hackers from getting inside in the first place and monitoring your network for intrusion.

Nearly all intrusions come from just three methods. You want to spend all your available cyber defense resources defending against these. They are:

(1) Phishing emails containing poisoned PDFs or PPTs. There are tons of zero-days targeting PDFs and PPTs, and the nature of both formats is such that there is more or less no way to secure yourself against a contemporary trojan in either one. Therefore, you basically have two options: require all PDF/PPT attachments to go through a vetting process, which is not practical for most organizations, or train your employees to vet emails themselves, which is the best option in most cases. A third option is to test all PDFs and PPTs sent to the organization in a sandboxed environment after the fact, but this is only possible for advanced organizations, like the military, not the average company. Option 3, of course, does not prevent the intrusion; it just warns you immediately if one occurs.

(2) Browser vulnerabilities. The vast majority of browser-based exploits target Internet Explorer, so you can defend probably 95% of these just by preventing users from using IE and requiring them to use Chrome or Firefox. You can prevent 99% of browser based exploits by requiring users to use NoScript and training them in its use, which unfortunately is not practical for most organizations.

(3) Server vulnerabilities. An example would be the NTP bug from a few years back. You can largely defend against these by making sure that all company servers are running on isolated networks (a "demilitarized zone") and that those servers are tight and not running unnecessary services. You especially want to make sure that any company web servers are running by themselves in isolated environments and that nothing can get into or out of those environments without a human explicitly doing the copy in a controlled way.

Of course there are lots of exploits that fall outside these categories, but your time is best spent addressing the three classes of vulnerabilities listed above.

Tyler Durden
1

Well, it's OK for an attacker to have 0days; the problem is how many zero-days they have and how much it costs them to burn all of those 0days on your network.

If you don't have patches up to date, it lowers the cost for an attacker to develop a kill chain.

When you think about it, how would you start attacking a network? Let's say you start with a phishing attack / watering hole attack.

If it is a watering hole attack, you might need to find a 0day in Flash which allows you to execute code in the browser, and then you might need to break out of the browser sandbox first, which requires another 0day. Next you might face AppContainer, which requires another exploit to reach OS-level privileges. There are also protection mechanisms such as SIP in macOS, which means that even if you have root access, you can't access important memory; that means you need another 0day kernel exploit. If the target is running Windows 10 with Credential Guard and you are targeting lsass.exe, then you might need yet another 0day to attack the hypervisor.

So it turns out the attack is very expensive and requires a lot of research effort, and while you are exploiting the systems, you might trigger security alerts.

So as a defender, make sure you know your network well and have defence controls in every single layer, and you should be able to defend against 0day attacks.

  • ``Well, its ok to have 0 days from an attacker, the problem is how many zero days they have and how much does it costs for them to burn all the 0 days in your network.`` I mean, it is not really okay to have 0-day vulnerabilities, if that is what you're suggesting, but yes, all written code has bugs and they should be fixed. Having any vulnerabilities is not okay and they should be patched, even if it is expensive to abuse them. – Kevin Dec 14 '18 at 03:28
  • @KevinVoorn Yeah, agreed, that's why I said `If you don't have patches up to date, it lowers the cost for an attacker to develop a kill chain.` Patching is still very important; you just can't stop someone from having a 0day –  Dec 15 '18 at 03:23
1

The problem is not only with zero-days. There are plenty of companies which are still dragging along 200-day-old unapplied patches, for a multitude of reasons (some good, some bad).

You have a large list of solutions already; another one is to use virtual patching. This creates a mitigation for the issue before malicious traffic hits the service. (I learned about it years ago through a Trend Micro product; no affiliation with them, but I tested it and it mostly worked.)

WoJ