
I was studying different WAFs, from open-source (such as ModSecurity and NAXSI) to commercial solutions (Imperva, Citrix, Fortinet, etc.). Many people state that having a whitelist-based WAF is far more efficient than blacklist.

I basically understand why a blacklist can be obsolete (even if in the case of bots it can be pretty good), and how a whitelist resolves those issues. But it’s difficult to find a detailed explanation of why whitelists should be used instead of blacklists.

Question: Why should I use a whitelist instead of a blacklist with a WAF?

Explanations and/or links to papers would be most welcome.

Stephane
Quentin Mollard

4 Answers


Blacklisting is good only if you have omniscient knowledge of every single vulnerability that could ever exist for a given product. Assuming you do not have infinite knowledge, you'd be in a constant fight to blacklist the next threat.

A whitelist is good if you know the expected behaviour and inputs for a product. E.g., you expect users to visit the site and enter numbers in a text box. You restrict the text box input to numbers only, so all possible exploits involving letters and symbols are explicitly disallowed.
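To make the text-box example concrete, here is a minimal sketch of the two approaches (the patterns and payload are illustrative, not taken from any real WAF ruleset):

```python
import re

# Whitelist: accept only what is known to be good (digits only).
def whitelist_check(value: str) -> bool:
    return re.fullmatch(r"[0-9]+", value) is not None

# Blacklist: reject known-bad patterns; everything else passes.
BLACKLIST = [r"<script", r"'\s*or\s+1=1", r"union\s+select"]

def blacklist_check(value: str) -> bool:
    return not any(re.search(p, value, re.IGNORECASE) for p in BLACKLIST)

# A payload the blacklist has never seen slips through it,
# but still fails the whitelist:
payload = "<img src=x onerror=alert(1)>"
print(whitelist_check(payload))  # False -> blocked
print(blacklist_check(payload))  # True  -> allowed through
```

The whitelist never needs updating when a new attack technique appears; the blacklist does.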

Whitelists can protect you from the future. Blacklists can protect you from the past.

Ohnana
  • Some WAFs include an auto-learning feature that discovers what should be whitelisted. For example, you run the application in an isolated environment while the WAF learns, and after this phase you put the WAF in production to block every behaviour that is not whitelisted. – kinunt Jan 21 '15 at 14:55
  • Manually creating a whitelist seems silly. All you're doing is duplicating the validation logic from the application in the firewall. That's practically guaranteed to get out of sync. – CodesInChaos Jan 21 '15 at 16:53
  • Whitelisting because: "You don't know what you don't know". Think about it. – k1DBLITZ Jan 21 '15 at 18:56

If you are able to design and maintain a whitelist describing exactly all the legal inputs your hosted applications may possibly receive, then it will bring you extra security, since it will be better at blocking unknown attacks.

If you do not satisfy the above prerequisite, a blacklist may be a better choice:

  • The list is already available, so it will be quicker for you to deploy,
  • The list is maintained by the upstream vendor or community; you just need to update it regularly to be confident it works as expected. You also benefit from the experience and knowledge of a large company or community (the more people involved, the less chance the list contains an error),
  • A blacklist containing all patterns known to be used by automated tools will bring you more security than an overly permissive whitelist where some input channels were missed, or were made too permissive to avoid blocking complex data (Google Analytics cookies seem to be a classic example).

All in all, a well-designed and correctly maintained whitelist will indeed bring better security. However, since it is specific to your applications, all the work relies on you alone. If you are not able or confident in building and maintaining such a list, a general blacklist built and maintained by an upstream company or community may be a better choice.

WhiteWinterWolf

In a guest post on our blog, John Stauffacher, a world-renowned expert in web application security and the author of Web Application Firewalls: A Practical Approach, recommends:

The best approach to web application security is to whitelist the good rather than to blacklist the bad.

Why? It is far simpler to enumerate all that is good within your application than it would be to continually update all of the bad that could possibly be thrown at your application. Your routes, cookies, parameters (and their values) are all known to your organization. Using this information you can create a proposed ‘whitelist’ of all the correct points of entry, cookies, parameters, and values for your application. This whitelist can become your baseline for the application, and any traffic that deviates from this baseline can be considered bad traffic.
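The baseline idea above can be sketched as a per-route table of expected parameters and value patterns (route names, parameter names, and patterns here are purely illustrative):

```python
import re

# Hypothetical application baseline: for each known route, the
# expected parameters and the pattern each value must match.
BASELINE = {
    "/account": {"id": r"[0-9]{1,10}", "tab": r"(profile|billing)"},
    "/search":  {"q": r"[\w \-]{1,64}"},
}

def allowed(route: str, params: dict) -> bool:
    expected = BASELINE.get(route)
    if expected is None:
        return False                          # unknown route: deny
    for name, value in params.items():
        pattern = expected.get(name)
        if pattern is None:
            return False                      # unexpected parameter: deny
        if re.fullmatch(pattern, value) is None:
            return False                      # value deviates from baseline
    return True

print(allowed("/account", {"id": "42", "tab": "billing"}))  # True
print(allowed("/account", {"id": "42 OR 1=1"}))             # False
print(allowed("/admin", {}))                                # False
```

Anything that deviates from this baseline (an unknown route, an unexpected parameter, an out-of-pattern value) is treated as bad traffic by default.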

A whitelisting approach is far more secure and efficient than continuously enumerating ‘the bad’ in your Web traffic. The bad changes on a daily basis. Web teams that rely on blacklisting find themselves behind the eight ball, chasing the latest zero-day threat and spending countless hours listing every attack vector known to man, writing and updating rules in their WAF and driving themselves crazy. In the end, their WAF becomes a list of attack signatures that looks into the past and fails to stop new threats.

So while the initial process of establishing a whitelist requires a bit more upfront time than blacklisting, you gain a more proactive and robust WAF security stance that doesn’t have to play catch-up with every zero-day threat that comes down the pike.

Include whitelisting as part of your standard Web application security practice, and make sure to update your list on a regular basis. You’ll be glad you did.

paj28

Isn't a whitelist going to give better performance? The whitelist for any given WAF will be quite short, whereas the blacklist for all threats out on the internet is very long. Therefore the time and CPU needed to check an input against the whitelist will be much smaller, the load on the WAF will be lower, and the performance of your application will be higher.
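A rough way to see this is to count pattern evaluations per request rather than benchmark a real WAF (this is a toy model; real engines optimize heavily, and the numbers below are purely illustrative):

```python
import re

# One anchored whitelist pattern per field, versus a long blacklist
# that must be scanned pattern-by-pattern for every request.
WHITELIST = re.compile(r"[0-9]{1,10}")
BLACKLIST = [re.compile(f"attack_signature_{i}") for i in range(5000)]

def whitelist_evaluations(value: str) -> int:
    WHITELIST.fullmatch(value)
    return 1                          # always exactly one pattern checked

def blacklist_evaluations(value: str) -> int:
    for n, pattern in enumerate(BLACKLIST, start=1):
        if pattern.search(value):
            return n                  # stop at the first match
    return len(BLACKLIST)             # benign traffic scans the whole list

print(whitelist_evaluations("12345"))   # 1
print(blacklist_evaluations("12345"))   # 5000
```

Note the worst case for the blacklist is ordinary, benign traffic, which is the bulk of what a WAF sees.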

brendan