25

Recently I've been reading about Web application firewalls and the fact that they protect against the most frequent attacks, like injection, XSS or CSRF.

However, a well-written application should already be immune to these exploits, so why do companies prefer buying these expensive devices to try to protect apps with security flaws (WAFs aren't perfect either) instead of fixing those security flaws in the first place?


Thanks for your detailed answers; I never thought such a newbie question would get so much attention.

  • 10
    "if the application is secure" - it's not. – user2357112 Mar 21 '14 at 04:06
  • @user2357112 But if it's not, the chances of some dumb firewall detecting the attack by itself are negligible as well. And if we have to do it manually, redeploying an application with a minor fix like this shouldn't take longer than adding a rule to the firewall. – CodesInChaos Mar 22 '14 at 15:32
  • Mitigating DoS is one of the few areas where such a firewall could be useful. – CodesInChaos Mar 22 '14 at 15:36

9 Answers

32

When deploying security, it is often a good idea to apply multiple layers. Just because you have a lock on your bedroom door doesn't mean you don't put one on the front door to your house. You may also apply a generic set of WAF rules in front of multiple applications.

A WAF may be part of a larger IDS/IPS suite. If the WAF is inline, it can also help with the performance of the application, since the application doesn't waste resources on blocking, logging, DB queries, etc.


You also make the assumption that the organization has the resources and skill to gain reasonable assurance about its application's security. If it's a third-party application or has third-party modules, those components may not be easily upgraded, the code may be closed source, or modifying the program may be against the license.

Eric G
  • 9,691
  • 4
  • 31
  • 58
  • 2
    I'd like to add that "secure" may be easily contested. It might make sense to get the WAF to protect against generic attacks that are hard to defend against on the application layer (e.g., unwanted spiders) – freddyb Mar 21 '14 at 13:34
  • 1
    Additionally, fixing an application that is already deployed in production may be difficult due to long release cycles (by the time a security issue is identified by the company, queued to engineering, tested, scheduled for deployment, downtime allocated, updated, deployment verified, and the app is back online, a lot of time can pass). So a WAF can be a quick way to guard the application until it is properly patched (especially for 0-day type vulns). Another purpose is that WAF rules can be written by one party (the supplier) and used by many, rather than each customer monitoring and writing their own, thus increasing efficacy. – LB2 Mar 21 '14 at 14:19
  • 1
    Complexity is the worst enemy of security, and a poorly written security layer can be the source of your compromise. Think buffer overflows in anti-virus products and WAFs that introduce SQL injection. – rook Sep 08 '14 at 13:38
8

Many organizations are saddled with legacy applications written by developers who are long since gone; WAFs are a way for those organizations to protect themselves from attacks against those applications.

WAFs are also much faster at deploying fixes. It can take weeks or months to update complex applications, whereas WAFs often have their protections updated in hours.

It's also about cost versus benefit: some WAFs are very good at protecting applications, so why spend millions re-writing legacy applications that are going to be phased out in a year?

GdD
  • 17,291
  • 2
  • 41
  • 63
  • 4
    +1 for rapid fixes. I've been in companies where it would take weeks to fix a flaw in the code but minutes to provide the same protection on the WAF. The WAF is NOT a replacement for the code fix, but it is an effective stop-gap. – schroeder Mar 20 '14 at 18:29
6

No. In fact, implementing a WAF increases the attack surface, making your infrastructure vulnerable to attacks against the WAF itself as well.

Deploying a WAF is a pragmatic measure, because you assume that the application may have vulnerabilities that the WAF can protect against; but this is a field where nobody is completely sure of anything, and administrators do what they feel is right.

In my opinion, the really right thing to do is to implement the necessary security measures and processes in the application itself. If you have that control over the application code and development process, you don't need a WAF. But this situation is not always possible.

kinunt
  • 2,759
  • 2
  • 23
  • 30
  • Completely agree except with the last part. It's always possible to get the source code. It's always possible to get physical access to the systems running the app. – atdre Apr 20 '15 at 10:38
  • @atdre It is not possible in all cases to get the source code off the box: for example, compiled third-party code, minified JavaScript, or VMs running on a cloud provider. Also, the build scripting and unit tests are often very complex and rarely deployed; it could take months to rebuild a system's build ecosystem. – Andrew Russell Nov 11 '21 at 02:30
  • @AndrewRussell Never had an issue with any of those barriers. In my last excursions into "getting the source code", I just EDR'd in, grabbed the WAR files, and then used some Java and JSP decompilers before submitting it all to Checkmarx. I admit that finding a JSP decompiler was difficult, and it did take about an hour for just that piece, but well worth it in the long run – atdre Nov 11 '21 at 18:50
5

No, but only a few applications are completely secure. A WAF is a way of mitigating attacks before they actually reach your application. Furthermore, you can easily identify malicious users and automatically block them.

WAFs aren't meant to fix your application; they are there to prevent and sometimes mitigate attacks. If your application is secure but the language it is written in is not, then mitigating actions can sometimes be taken to prevent attacks until a fix is released.
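As a rough illustration of "mitigating before the request reaches your application" (a minimal sketch, not a real WAF ruleset; the patterns and port below are made up), a filtering layer sitting in front of the app could look something like this in Python:

```python
import re
from wsgiref.simple_server import make_server

# Illustrative patterns only; a real WAF ships far richer, regularly updated rules.
BLOCKED_PATTERNS = [
    re.compile(r"(?i)<script"),          # naive reflected-XSS probe
    re.compile(r"(?i)union\s+select"),   # naive SQL injection probe
]

def waf_middleware(app):
    """Wrap a WSGI app and reject requests matching known-bad patterns."""
    def wrapper(environ, start_response):
        query = environ.get("QUERY_STRING", "")
        if any(p.search(query) for p in BLOCKED_PATTERNS):
            # The request never reaches the application.
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Request blocked"]
        return app(environ, start_response)
    return wrapper

def demo_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from the protected application"]

if __name__ == "__main__":
    # Hypothetical port; in practice the filter runs on a separate device
    # or reverse proxy in front of the real application.
    make_server("", 8080, waf_middleware(demo_app)).serve_forever()
```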

Lucas Kauffman
  • 54,169
  • 17
  • 112
  • 196
4

Organizations have to look at the capabilities WAFs can provide that traditional web applications do not provide (or are generally not coded to provide).

For example, WAFs generally have some type of "response" mechanism built in. In the event of an attack, they can automatically respond to protect the application. This can include brute-force protection, DoS protection (to a degree), and banning requests from certain IP addresses. You could code your application to do this, but a WAF sits at your perimeter, and it is best to stop malicious traffic there rather than further in your network. Furthermore, a network-based WAF can protect several websites, perhaps cutting down on the development time required.
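To make that "response" idea concrete (a rough sketch, not any particular product's implementation; the thresholds are made up), a perimeter device might track failures per source IP and temporarily ban offenders:

```python
import time
from collections import defaultdict

# Illustrative thresholds; real products make these configurable per rule.
MAX_FAILURES = 5        # failed attempts allowed per window
WINDOW_SECONDS = 60
BAN_SECONDS = 600

failures = defaultdict(list)   # source IP -> timestamps of recent failures
banned_until = {}              # source IP -> time at which the ban expires

def record_failure(ip):
    """Call this for every failed login (or other suspicious event) seen."""
    now = time.time()
    # Keep only failures inside the sliding window.
    failures[ip] = [t for t in failures[ip] if now - t < WINDOW_SECONDS]
    failures[ip].append(now)
    if len(failures[ip]) >= MAX_FAILURES:
        banned_until[ip] = now + BAN_SECONDS

def is_banned(ip):
    """Drop requests at the perimeter while this returns True."""
    return banned_until.get(ip, 0) > time.time()
```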

One key benefit is around attack detection and logging. If your WAF detects an attack, it can pass that information along to a SIEM solution. The WAF has signatures to detect attacks against a variety of backends, not just the one you built. Your security staff could then use that information to determine the best course of action; maybe they correlate it with other attacks that are happening, etc.

Another key feature is that the WAF can be used to protect the web server as well as the web application. For example, WAFs can be configured to stop buffer overflow attacks against IIS itself. Your web application cannot do this.

Lastly, WAFs can be used to do "virtual patching". For example, say you find out that your web application has a security hole when sent a particular request. You could, of course, change the code, but that may take time (change management, getting a developer to write something, testing, etc.). While you wait for a patch from the development team, a signature could be created to "protect" the web site from that attack vector.
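For instance (the endpoint, parameter, and hole here are hypothetical, not a real signature), suppose the vulnerable request is a GET to /export with a path parameter that allows directory traversal. A virtual patch simply refuses that shape of request until the code is fixed:

```python
from urllib.parse import urlparse, parse_qs

def virtual_patch(method, url):
    """Temporary rule: block the one request shape known to trigger the hole."""
    parsed = urlparse(url)
    # Hypothetical vulnerable endpoint and parameter.
    if method == "GET" and parsed.path == "/export":
        for value in parse_qs(parsed.query).get("path", []):
            if ".." in value:   # directory traversal attempt
                return True
    return False

# The exploit shape is refused, legitimate use still works.
assert virtual_patch("GET", "/export?path=../../etc/passwd") is True
assert virtual_patch("GET", "/export?path=reports/2014.csv") is False
```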

One thing to add is along the lines of @Lucas Kauffman's answer: security is all about layers. You cannot know for sure that your web application is "completely" secure, and adding another layer in front of it doesn't hurt.

WAFs have been a hot topic since they were first introduced, with many security folks on either side of the "need it / don't need it" debate. I think it all comes down to the capabilities you require for your given situation.

CtrlDot
  • 472
  • 2
  • 6
2

WAFs are a reaction to the irresponsibility of allowing everything to be done at the web level. Put it this way: previously we had services running on different ports. Soon enough there was a need for firewalls to block certain services from being indiscriminately open to anyone who wished to probe them. So services were filtered, and the only thing allowed through was, say, port 80. So what did people start doing? Making services available via port 80. Now you have the option of using services via port 80 that would previously have been filtered by a normal firewall on their specific ports.

History seems to repeat itself: people create insecure services, security-minded people put restrictions in place (trading off usability), so people try to get around them and open things via a different means (in this case, “let’s put everything via 80”); this in turn forces the security-minded people to revisit the topic and, here, adapt the firewall for the web too. This is a constant trade-off battle between security and usability.

Thus, asking whether one should use a WAF nowadays is much like asking, fifteen years ago, whether one should use a firewall.

Lex
  • 4,247
  • 4
  • 19
  • 27
1

Notes on lifecycles and time to deploy.

As mentioned above, the lifecycle of an application substantially impacts the time to fix.

Web applications in a corporation or other organisation come in all shapes and sizes.

  • Commercial off the shelf, currently under active support.
  • Commercial off the shelf, old and out of date, versions behind.
  • Commercial off the shelf, unsupported by vendor.
  • Self-developed, currently under active support.
  • Self-developed but with no support crew, or with outsourced support that incurs cost.
  • Open source with no support agreement / no patches.
  • Custom web-application without current support agreements.

And the web application may have different uses in an organisation.

  • Critical / Core system.
  • Important system.
  • One-Off system without current importance.
  • Legacy system without clear business owner.
  • Quick and Dirty deployment without management oversight.

So with all those variables it can take a long time and a lot of effort to:

  • Investigate security issues and upgrade path, determine impact of upgrade.
  • Start up a dev team / agree a contract / get business funding.
  • Get up to speed on the application.
  • Get the security fix developed / tested and regression tested.
  • Then to deploy it, ensure support arrangements etc.

So using a web application firewall can cut through all those layers and implement a fix quickly, without a lot of money, time, or effort.

Andrew Russell
  • 3,633
  • 1
  • 20
  • 29
0

If the application were 100% secure then, in theory, you wouldn't need a firewall. In practice you can't be 100% certain that the app isn't vulnerable. You should also give the app only the permissions it needs: for example, a search app should only be able to read the table it searches. That won't stop an attack, but it will limit the damage.
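A minimal sketch of that idea, assuming a PostgreSQL backend; the role, table, and credentials are hypothetical:

```python
# Hypothetical role created once by the DBA, e.g.:
#   CREATE ROLE search_ro LOGIN PASSWORD '...';
#   GRANT SELECT ON products TO search_ro;
import psycopg2

def search_products(term):
    # Connect as the restricted role, not as the schema owner, so a flaw in
    # the search feature cannot modify data or touch other tables.
    conn = psycopg2.connect(dbname="appdb", user="search_ro",
                            password="change-me", host="localhost")
    with conn, conn.cursor() as cur:
        # Parameterised query keeps the user input out of the SQL text.
        cur.execute("SELECT id, title FROM products WHERE title ILIKE %s",
                    ("%" + term + "%",))
        return cur.fetchall()

print(search_products("widget"))
```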

0

Always. WAFs add an extra layer of security which can be updated as new vulnerabilities are discovered; your application may be secure today but broken tomorrow.

Sam Aldis
  • 73
  • 7