
Currently running BIND on RHEL 5.4 and am looking for a more efficient manner of providing DNS redirection to a honeypot server for a large (30,000+) list of forbidden domains.

Our current solution for this requirement is to include, in named.conf, a file containing a zone master declaration for each blocked domain. Each of these zone declarations points to the same zone file, which resolves all hosts in that domain to our honeypot servers. Basically, this allows us to capture any "phone home" attempts by malware that may infiltrate the internal systems.

The problem with this configuration is the large amount of time taken to load all 30,000+ domains, as well as the management of the domain list configuration file itself: if any errors creep into this file, the BIND server will fail to start, which makes automating the process a little frightening. So I'm looking for something more efficient and potentially less error-prone.

named.conf entry:

include "blackholes.conf";

blackholes.conf entry example:

zone "bad-domain.com" IN {
    type master;
    file "/var/named/blackhole.zone";
    allow-query { any; };
    notify no;
};
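Since the include file is mechanically generated, one way to keep copy/paste errors out of it is to emit every zone statement from a flat one-domain-per-line list. A rough sketch of that idea (the file names domains.txt and blackholes.conf are illustrative, not from the question):

```shell
#!/bin/sh
# Sketch: regenerate blackholes.conf from a one-domain-per-line list.
# File names here are assumptions for illustration.
printf 'bad-domain.com\nanother-bad.example\n' > domains.txt   # demo input

# Emit one single-line zone statement per domain, all pointing at the
# shared blackhole zone file.
awk '{
  printf "zone \"%s\" IN { type master; file \"/var/named/blackhole.zone\"; allow-query { any; }; notify no; };\n", $1
}' domains.txt > blackholes.conf

cat blackholes.conf
```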

blackhole.zone entries:

$INCLUDE std.soa

@ NS ns1.ourdomain.com.
@ NS ns2.ourdomain.com.
@ NS ns3.ourdomain.com.

                       IN            A                192.168.0.99
*                      IN            A                192.168.0.99

syn-

4 Answers


I haven't found a good way to eliminate loading each domain as its own zone, but using the following rndc command eliminates the concern of the server failing in the event of a malformed entry.

rndc reconfig

A full-on server restart/reload will still result in a failure to start.
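Since a malformed entry can still break a full restart, it may help to reject syntactically invalid domains before they ever reach the generated config. A rough sketch using a loose LDH (letters-digits-hyphen) filter; the file names and regex are illustrative, and the regex is deliberately not a complete RFC 1035 validator:

```shell
#!/bin/sh
# Sketch: keep only plausibly valid domain names so a bad database entry
# can't end up in named.conf. File names here are assumptions.
printf 'good-domain.com\n-bad.example\nok.example.net\nunder_score.bad\n' > candidates.txt  # demo input

# Each label must start/end with a letter or digit; hyphens only inside;
# final label must be alphabetic with length >= 2.
grep -E '^([a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?\.)+[a-zA-Z]{2,}$' candidates.txt > clean.txt

cat clean.txt
```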

syn-

Edit: Sorry, I misread your question; I'm proposing the same thing you're already doing. Maybe you can include a file generated from a database?

I have a dropDomain file containing:

$TTL 3600       ; 1 hour
@               IN SOA  xxxxxxxx.fr. dnsmaster.xxxxxxxx.fr. (
                                2009112001 ; serial 20yymmdd+nn
                                900        ; refresh (15 minutes)
                                600        ; retry (10 minutes)
                                86400      ; expire (1 day)
                                3600       ; minimum (1 hour)
                                )
                        NS      dns1.xxxxxxxx.fr.
                        NS      dns2.xxxxxxxx.fr.
                        MX      0       smtp.xxxxxxx.fr.

*                       A       127.0.0.1

; vim:filetype=bindzone

Then I just add the domains in my list to named.conf.local:

# Master zones for domains we no longer want to resolve (malware, viruses, remote-control tools...)
zone "zzzzzzz.com" { type master; file "/etc/bind/dropDomain.tld"; allow-query { any; }; };
zone "yyyyyyy.com" { type master; file "/etc/bind/dropDomain.tld"; allow-query { any; }; };
zone "ttttttt.com" { type master; file "/etc/bind/dropDomain.tld"; allow-query { any; }; };

I don't need to define each domain in the zone file; it is generic.

Dom
  • As you stated, that's the same method I'm currently employing (which, oddly enough, is being generated from a database backend). The "potential errors" part is about data entered into the DB incorrectly... I've got sanity checks in place to handle most of this, but what I was hoping for was a way to load "blacklisted domains" into the configuration in a manner that wouldn't halt initialization of the server if there was a bad entry that didn't comply with RFC restrictions. – syn- Apr 06 '10 at 22:06
  • I don't know the RFC restrictions, but you can prepare the whole DNS configuration and run `named-checkzone` to see whether any errors are detected. If not, then apply the configuration to the DNS server. – Dom Apr 07 '10 at 10:25

In theory you can avoid the slow load time by making your blackhole list part of your root hints file (e.g. via $INCLUDE) and then changing that file from being a hint to a master. That last bit is necessary to prevent your server from downloading the real root hints from the internet.

e.g. in named.ca:

a.root-servers.net.  IN A ....
m.root-servers.net.  IN A ....
$INCLUDE blackhole.zone

and then in blackhole.zone:

$ORIGIN example.com.
@ IN A 192.168.0.99
* IN A 192.168.0.99

$ORIGIN example.net.
@ IN A 192.168.0.99
* IN A 192.168.0.99

; ad-infinitum

There's no real need for NS records or separate zone statements for each blackholed zone - you're effectively inserting fake authoritative data into your local copy of the root zone. Just make sure you download the real root zone occasionally!
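The per-domain stanzas in blackhole.zone lend themselves to generation from the same flat domain list. A rough sketch (blocked.txt and the output path are illustrative, not from the answer):

```shell
#!/bin/sh
# Sketch: emit one $ORIGIN/wildcard stanza per blocked domain, suitable
# for $INCLUDE into the hints-turned-master file described above.
printf 'example.com\nexample.net\n' > blocked.txt   # demo input

awk '{
  printf "$ORIGIN %s.\n@ IN A 192.168.0.99\n* IN A 192.168.0.99\n\n", $1
}' blocked.txt > blackhole.zone

cat blackhole.zone
```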

Then just go with @syn's suggestion of running named-checkzone before each reload and/or restart.

NB: I haven't tested this.

Alnitak

Have you considered an alternative to BIND? I haven't used any yet, but there are MySQL-driven alternatives with web frontends, such as PowerDNS with Poweradmin. This might make updates less error-prone and risky. PowerDNS even has a tool to convert a BIND zone file to SQL for migration.

Also, can I ask if that list is publicly available? I'm very interested in this myself.

Aaron Copley
  • I like the thought of backending it all with a database, but for now I've found my solution. I'll update the post with what we're doing now. As for providing the list, unfortunately that isn't in the public domain. ;) – syn- Aug 10 '10 at 02:38
  • I was afraid of that.. :) – Aaron Copley Aug 10 '10 at 13:35