
I have a server running Apache, and I recently installed mod_security2 because I get attacked a lot by this:

My Apache version is 2.2.3 and I use mod_security2.

These were the entries from the error log:

[Wed Mar 24 02:35:41 2010] [error] 
[client 88.191.109.38] client sent HTTP/1.1 request without hostname 
(see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)

[Wed Mar 24 02:47:31 2010] [error] 
[client 202.75.211.90] client sent HTTP/1.1 request without hostname 
(see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)

[Wed Mar 24 02:47:49 2010] [error]
[client 95.228.153.177] client sent HTTP/1.1 request without hostname
(see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)

[Wed Mar 24 02:48:03 2010] [error] 
[client 88.191.109.38] client sent HTTP/1.1 request without hostname
(see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)

Here are the errors from the access_log:

202.75.211.90 - - 
[29/Mar/2010:10:43:15 +0200] 
"GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 392 "-" "-"
211.155.228.169 - - 
[29/Mar/2010:11:40:41 +0200] 
"GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 392 "-" "-"
211.155.228.169 - - 
[29/Mar/2010:12:37:19 +0200] 
"GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 392 "-" "-" 
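The access_log lines above can be checked mechanically. Below is a minimal sketch in Python (the log pattern is a simplified version of Apache's combined log format, and the sample line is copied from above) that pulls out the client IP and status code:

```python
import re

# Simplified pattern for Apache's combined log format:
# ip ident user [time] "request" status size "referer" "user-agent"
LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\S+) '
    r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'
)

line = ('202.75.211.90 - - [29/Mar/2010:10:43:15 +0200] '
        '"GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 392 "-" "-"')

m = LOG_RE.match(line)
print(m.group('ip'), m.group('status'))  # 202.75.211.90 400
```

The 400 status shows Apache already rejected the request, and the two trailing "-" fields are the empty referer and user-agent typical of this scanner.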

I tried configuring mod_security2 like this:

SecFilterSelective REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind"
SecFilterSelective REQUEST_URI "\w00tw00t\.at\.ISC\.SANS"
SecFilterSelective REQUEST_URI "w00tw00t\.at\.ISC\.SANS"
SecFilterSelective REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind:"
SecFilterSelective REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind:\)"

The thing with mod_security2 is that SecFilterSelective cannot be used (it is ModSecurity 1.x syntax); it gives me errors. Instead I use rules like this:

SecRule REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind"
SecRule REQUEST_URI "\w00tw00t\.at\.ISC\.SANS"
SecRule REQUEST_URI "w00tw00t\.at\.ISC\.SANS"
SecRule REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind:"
SecRule REQUEST_URI "w00tw00t\.at\.ISC\.SANS\.DFind:\)"

Even this does not work. I don't know what to do anymore. Anyone have any advice?

Update 1

I see that nobody can solve this problem using mod_security. So far, using iptables seems like the best option, but I think the file will become extremely large because the IP changes several times a day.

I came up with two other solutions; can someone comment on whether they are good or not?

  1. The first solution that comes to my mind is excluding these attacks from my Apache error logs. This will make it easier for me to spot other urgent errors as they occur, without having to sift through a long log.

  2. The second option is better, I think, and that is blocking hosts that do not send requests in the correct form. In this example the w00tw00t attack is sent without a hostname, so I think I can block hosts that omit it.

Update 2

After going through the answers I came to the following conclusions.

  1. Custom logging for Apache will consume some unnecessary resources, and if there really is a problem you will probably want to look at the full log with nothing missing.

  2. It is better to just ignore the hits and concentrate on a better way of analyzing your error logs. Using filters for your logs is a good approach for this.

Final thoughts on the subject

The attack mentioned above will not reach your machine if you at least have an up-to-date system, so there are basically no worries.

It can be hard to filter out all the bogus attacks from the real ones after a while, because both the error logs and access logs get extremely large.

Preventing this from happening in any way will cost you resources and it is a good practice not to waste your resources on unimportant stuff.

The solution I use now is logwatch. It sends me summaries of the logs, filtered and grouped. This way you can easily separate the important from the unimportant.

Thank you all for the help, and I hope this post can be helpful to someone else too.

Saif Bechan

11 Answers


From your error log, they are sending an HTTP/1.1 request without the Host: portion of the request. From what I read, Apache replies with a 400 (Bad Request) error to this request before handing it over to mod_security, so it doesn't look like your rules will be processed (Apache deals with it before needing to hand over to mod_security).

Try yourself:

telnet hostname 80
GET /blahblahblah.html HTTP/1.1  (enter)
(enter)

You should get the 400 error and see the same error in your logs. This is a bad request, and Apache is giving the correct answer.

A proper request would look like:

GET /blahblahblah.html HTTP/1.1
Host: blah.com
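What Apache enforces here can be sketched in a few lines of Python. The has_host_header helper below is hypothetical (it is not Apache's code), but it mirrors the RFC 2616 section 14.23 rule that an HTTP/1.1 request must carry a Host header:

```python
def has_host_header(raw_request):
    # RFC 2616 section 14.23: an HTTP/1.1 request MUST include a Host header.
    head = raw_request.split(b"\r\n\r\n", 1)[0]
    request_line, *header_lines = head.split(b"\r\n")
    if not request_line.rstrip().endswith(b"HTTP/1.1"):
        return True  # HTTP/1.0 and earlier requests are exempt
    return any(h.lower().startswith(b"host:") for h in header_lines)

scan = b"GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1\r\n\r\n"
good = b"GET /blahblahblah.html HTTP/1.1\r\nHost: blah.com\r\n\r\n"
print(has_host_header(scan), has_host_header(good))  # False True
```

A request that fails this check is answered with 400, which matches the behavior described above: the scanner's request never reaches your rules.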

A workaround for this issue could be to patch mod_uniqueid to generate a unique ID even for a failed request, so that Apache passes the request on to its request handlers. The following URL is a discussion about this workaround, and includes a patch for mod_uniqueid you could use: http://marc.info/?l=mod-security-users&m=123300133603876&w=2

I couldn't find any other solutions for it, and I wonder if a solution is actually required.

nash
Imo
  • I see the problem now. Do you recommend the solution provided in the article, or do you think it's better to just leave it as it is? It is a scanner for any back-doors in the system. If I leave it just scanning, I could one day get attacked. – Saif Bechan Mar 29 '10 at 14:10
  • Hello Saif, I think as long as you keep your Apache installation up to date with your distribution's (or manual) security patches you should be fine. A poorly structured HTTP/1.1 request (as you have been seeing) shouldn't return anything other than a 400 error from Apache. It looks like it _may_ have been some sort of vulnerability scan focused at D-Link routers. (According to some other sources) – Imo Mar 29 '10 at 18:16
  • Is there at least a way of getting these entries out of my Apache error_log? – Saif Bechan Mar 29 '10 at 22:35
  • You _may be_ able to do it via mod_log_config :: http://httpd.apache.org/docs/2.2/mod/mod_log_config.html#customlog – Imo Mar 30 '10 at 07:02
  • My extra hint would be: configure your _default_ virtualhost next to the ones actually in use. The attempts mentioned above will end up in the logs for the _default_ virtualhost. – Koos van den Hout Aug 19 '12 at 11:35

Filtering IPs is not a good idea, imho. Why not try filtering the string you know?

I mean:

iptables -I INPUT -p tcp --dport 80 -m string --to 60 --algo bm --string 'GET /w00tw00t' -j DROP
Des
  • http://spamcleaner.org/en/misc/w00tw00t.html similar solution, but a bit more detailed. – Isaac Jun 10 '13 at 11:20
  • One problem with string filtering in the firewall is that it is "fairly slow". – Alexis Wilke Dec 28 '15 at 05:18
  • @AlexisWilke do you have evidence to suggest that iptables string filtering is slower than filtering at apache level? – jrwren Nov 06 '17 at 15:59
  • @jrwren Actually, it can be fairly slow if and only if you don't pass the packet offset to stop searching, i.e "--to 60" here. By default, it will search through the whole packet, the maximum limit being set at 65,535 bytes, the maximum IP packet length: https://blog.nintechnet.com/how-to-block-w00tw00t-at-isc-sans-dfind-and-other-web-vulnerability-scanners/ The manual clearly tells "If not passed, default is the packet size". – gouessej Dec 16 '18 at 10:59
  • that is a theoretical max. a more realistic max length over internet is ~1500. – jrwren Dec 19 '18 at 21:13

I've also started seeing these types of messages in my log files. One way to prevent these types of attacks is to set up fail2ban ( http://www.fail2ban.org/ ) and configure specific filters to blacklist these IP addresses in your iptables rules.

Here's an example of a filter that would block the IP addresses responsible for those messages:

[Tue Aug 16 02:35:23 2011] [error] [client ] File does not exist: /var/www/skraps/w00tw00t.at.blackhats.romanian.anti-sec:)

Apache w00t w00t messages jail - regex and filter:

Jail

 [apache-wootwoot]
 enabled  = true
 filter   = apache-wootwoot
 action   = iptables[name=HTTP, port="80,443", protocol=tcp]
 logpath  = /var/log/apache2/error.log
 maxretry = 1
 bantime  = 864000
 findtime = 3600

Filter

 # Fail2Ban configuration file
 #
 # Author: Jackie Craig Sparks
 #
 # $Revision: 728 $
 #
 [Definition]
 #Woot woot messages
 failregex = ^\[\w{1,3} \w{1,3} \d{1,2} \d{1,2}:\d{1,2}:\d{1,2} \d{1,4}] \[error] \[client <HOST>] File does not exist: \/.{1,20}\/(w00tw00t|wootwoot|WootWoot|WooTWooT).{1,250}
 ignoreregex =
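A failregex can be dry-run outside fail2ban: fail2ban expands the <HOST> tag into an address-matching group before compiling the expression, and that substitution can be imitated in Python. This is only a sanity-check sketch (the sample line uses a placeholder client address, and the <HOST> expansion is simplified to a generic non-space group):

```python
import re

# The failregex from the filter above, with <HOST> expanded roughly
# the way fail2ban does it before compiling
failregex = (r"^\[\w{1,3} \w{1,3} \d{1,2} \d{1,2}:\d{1,2}:\d{1,2} \d{1,4}\] "
             r"\[error\] \[client <HOST>\] File does not exist: "
             r"\/.{1,20}\/(w00tw00t|wootwoot|WootWoot|WooTWooT).{1,250}")
pattern = re.compile(failregex.replace("<HOST>", r"(?P<host>\S+)"))

line = ("[Tue Aug 16 02:35:23 2011] [error] [client 10.0.0.1] File does not "
        "exist: /var/www/skraps/w00tw00t.at.blackhats.romanian.anti-sec:)")
m = pattern.search(line)
print(m.group("host"))  # 10.0.0.1
```

For testing against real logs and the actual filter file, fail2ban ships its own tool: fail2ban-regex <logfile> <filterfile>.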
  • It is true that you can block them, but there is no need to, because they are just bad requests. It's better to just ignore them; it saves you work and frees up some resources. – Saif Bechan Aug 19 '11 at 17:50
  • Right, @Saif Bechan: if someone worries about these "testing attacks" succeeding, they should fix their own application instead of wasting time finding a way to block them. – Thomas Berger Aug 19 '11 at 23:57
  • Gave you +1, thanks for the answer. – Saif Bechan Aug 20 '11 at 00:49
  • @SaifBechan, I disagree. w00tw00t is a vulnerability scanner, and a machine that's issuing such requests can't be trusted with attempting other types of requests, so if I'm a system administrator and it takes me 2 minutes to ban such clients for days at a time, I'd do so. I wouldn't base my entire security implementation on such an approach, though. – Isaac Nov 26 '12 at 03:51

w00tw00t.at.blackhats.romanian.anti-sec is a hacking attempt and uses spoofed IPs, so lookups such as VisualRoute will report China, Poland, Denmark, etc., according to the IP being used at that time. So setting up a deny by IP or resolvable host name is well-nigh impossible, as it will change within an hour.

PRW
  • These vulnerability scans do not use spoofed IP addresses. If they did, the TCP 3-way handshake would not be completed and Apache would not log the request. For caveats (rogue ISP, router operators, etc), see https://security.stackexchange.com/q/37481/53422 – Anthony Geoghegan Sep 12 '18 at 17:23

I personally wrote a Python script to automatically add iptables rules.

Here's a slightly abbreviated version without logging and other junk:

#!/usr/bin/python
from subprocess import Popen, PIPE
import re
import shlex
import sys

def find_dscan():
        # Scan the last 5000 error-log lines for w00tw00t entries
        p1 = Popen(['tail', '-n', '5000', '/usr/local/apache/logs/error_log'], stdout=PIPE)
        p2 = Popen(['grep', 'w00t'], stdin=p1.stdout, stdout=PIPE)

        output = p2.communicate()[0].split('\n')

        ip_list = []

        for line in output:
                # Pull the first IPv4 address out of each matching line
                result = re.findall(r"\b(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\b", line)
                if len(result):
                        ip_list.append(result[0])

        # Deduplicate the offending addresses
        return set(ip_list)

for ip in find_dscan():
        # Drop all traffic to and from the offending address
        block_in = "iptables -A INPUT -s " + ip + " -j DROP"
        block_out = "iptables -A OUTPUT -d " + ip + " -j DROP"
        Popen(shlex.split(block_in))
        Popen(shlex.split(block_out))

sys.exit(0)
Xorlev

I believe the reason mod_security isn't working for you is that Apache hasn't been able to parse the requests themselves; they are out-of-spec. I'm not sure you have a problem here: Apache is logging weird stuff that is happening out on the net, and if it doesn't log it, you won't even be aware it's happening. The resources required to log the requests are probably minimal. I understand it's frustrating that someone is filling up your logs, but it will be more frustrating if you disable logging only to find you really need it, for example when someone breaks into your webserver and you need the logs to show how they got in.

One solution is to set up error logging through syslog; then, using rsyslog or syslog-ng, you can specifically filter out and discard these RFC violations regarding w00tw00t. Alternatively, you can filter them into a separate log file so your main ErrorLog stays easy to read. Rsyslog is incredibly powerful and flexible in this regard.

So in httpd.conf you might do:

ErrorLog syslog:user 

then in rsyslog.conf you might have:

:msg, contains, "w00tw00t.at.ISC.SANS.DFind" /var/log/httpd/w00tw00t_attacks.log
& ~

The "& ~" line discards the message after it has been written, which keeps it out of the main log.

Be aware that this approach will actually use many times more resources than logging directly to a file. If your webserver is very busy, this could become a problem.

It's best practice to have all logs sent to a remote logging server as soon as possible anyway, and this will benefit you should you ever get broken into, as it is much more difficult to erase the audit trail of what was done.

IPTables blocking is an idea, but you may end up with a very large iptables block list, which can have performance implications of its own. Is there a pattern in the IP addresses, or is it coming from a large distributed botnet? There will need to be a certain percentage of repeat offenders before you see a benefit from iptables.

hellomynameisjoel
  • Nice answer, I like the different approaches. Thinking about it, having custom logging will create more resource usage, because everything has to be checked first, so I guess this option falls off also. I now have logwatch enabled. It sends me a report twice a day with summaries of the whole system. The Apache logs get checked too, and it just says 300 w00tw00t attempts. I think I will leave the setup as it is for the time being. – Saif Bechan Mar 30 '10 at 11:53

You say in Update 2:

Problem that still remains: These attacks are from bots that search for certain files on your server. This particular scanner searches for the file /w00tw00t.at.ISC.SANS.DFind:).

Now you can just ignore it which is most recommended. The problem remains that if you do have this file on your server somehow one day, you are in some trouble.

From my previous reply we gathered that Apache returns an error message due to a poorly formed HTTP/1.1 request. All webservers supporting HTTP/1.1 should probably return an error when they receive this message (I've not double-checked the RFC; perhaps RFC2616 tells us).

Having w00tw00t.at.ISC.SANS.DFind: on your server somewhere does not mystically mean "you are in some trouble"... If you create the w00tw00t.at.ISC.SANS.DFind: file in your DocumentRoot or even DefaultDocumentRoot, it does not matter... the scanner is sending a broken HTTP/1.1 request and Apache is saying "no, that's a bad request... goodbye". The data in the w00tw00t.at.ISC.SANS.DFind: file will not be served.

Using mod_security for this case is not required unless you really want to (no point?)... in which case, you can look at patching it manually (link in other answer).

Another thing you could possibly look at is the RBL feature in mod_security. Perhaps there is an RBL online somewhere that provides w00tw00t IPs (or other known malicious IPs). This would, however, mean that mod_security does a DNS lookup for every request.

Imo
  • I don't think Apache rejects them; it just throws the error but the lookup still passes. I have the same w00tw00t.at.ISC.SANS.DFind in the access log. It does a GET, so the lookup is done, and if you have the file on your machine it will get executed. I can post the access log entries, but they look the same as the error log, only with a GET in front of them. Apache throws the error but the request passes. That is why I asked if it would be a good idea to block these requests without hostnames. But I don't want to block out normal users. – Saif Bechan Mar 31 '10 at 09:07
  • Sure, you get the same entry in the access log, but look at the error code... 400. It is not processed. The hostname in HTTP/1.1 is used to tell Apache which virtual host to send the request to; without the hostname part of the HTTP/1.1 request, Apache does not know where to send it and returns a "400 Bad Request" error back to the client. – Imo Mar 31 '10 at 09:18
  • Try it yourself... create yourself a html page on your webserver and try getting to it manually using "telnet hostname 80" ... the other steps are in my first answer. I'd put a large bounty on it that you can't get the html file to display using HTTP/1.1 without the hostname. – Imo Mar 31 '10 at 09:20
  • Ah yes, thanks for pointing that out to me. I always thought the access_log entries were ones that had passed through the error log and actually entered your machine. Thank you for pointing this out, and I will edit my post. I really appreciate your help. – Saif Bechan Mar 31 '10 at 09:27
  • Hi Saif, no problems, glad to have helped. Regards, Imo – Imo Mar 31 '10 at 10:45
1

How about adding a rule to mod_security? Something like this:

   SecRule REQUEST_URI "@rx (?i)\/(php-?My-?Admin[^\/]*|mysqlmanager|myadmin|pma2005|pma\/scripts|w00tw00t[^\/]+)\/" \
       "id:'0000013',severity:'2',deny,log,status:400,msg:'Unacceptable folder.'"
Kreker

I see that most of the solutions are already covered above; however, I would like to point out that not all "client sent HTTP/1.1 request without hostname" attacks are aimed directly at your server. There are many different attempts to fingerprint and/or exploit the network equipment in front of your server, e.g. using:

client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /tmUnblock.cgi

to target Linksys routers, etc. So sometimes it helps to widen your focus and divide your defense efforts between all systems equally, i.e.: implement router rules, firewall rules (hopefully your network has one), server firewall/iptables rules, and related services such as mod_security, fail2ban, and so on.

Milan

How about this?

iptables -I INPUT -p tcp --dport 80 -m string --to 70 --algo bm --string 'GET /w00tw00t.at.ISC.SANS.DFind' -j DROP
iptables -I INPUT -p tcp --dport 80 -m string --to 70 --algo bm --string 'GET /w00tw00t.at.ISC.SANS.DFind' -j LOG --log-level 4 --log-prefix Hacktool.DFind:DROP:

It works fine for me.
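As a sanity check on the --to 70 offset used above: the string match only scans up to that byte offset of the packet, and the search string sits at the very start of the HTTP payload, so 70 bytes is plenty. A quick check of the lengths involved (a sketch; the request line is the one from the logs above):

```python
needle = "GET /w00tw00t.at.ISC.SANS.DFind"
request_line = "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1\r\n"

# The match starts at byte 0 of the payload and the needle is only 31
# bytes long, so bounding the kernel's scan with "--to 70" is safe and
# keeps the per-packet search cost low.
print(request_line.index(needle), len(needle))  # 0 31
```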

  • I recommend the OWASP CRS 2.2.5 or greater rule set for mod_security – Urbach-Webhosting Mar 01 '16 at 20:52
  • This is really not a good idea. You'll end up with lots of hanging connections. Plus if your site has any discussion about those requests, you can end up with false positives. – kasperd Mar 01 '16 at 21:04

If you use the Hiawatha web server as a reverse proxy, these scans are automatically dropped as garbage and the client banned. It also filters XSS and CSRF exploits.

Stuart Cardall