
First, let me be clear that this isn't a duplicate of the "Does deliberately wrong information from a DNS server violate standards / generally accepted good practices?" thread, as I'm not interested in legal aspects, DNS standards, or ISP best practices. Those are not my concern at the moment, and while the post mentioned is somewhat relevant in that it discusses the same techniques, it doesn't provide much insight into what I'd like to ask you. Which brings me to my question:

What I would like to know is your opinion on blocking web server requests (and thus delivery of the requested contents) from those clients whose forward DNS look-up results do not contain the IP the request originated from. Or, to put it differently, clients that fail a forward-confirmed reverse DNS (FCrDNS) check.
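For clarity, here's roughly what the check boils down to. The following is a minimal Python sketch; the function is illustrative and not my actual Socks library code:

```python
import socket

def fcrdns_ok(client_ip: str) -> bool:
    """Forward-confirmed reverse DNS: the PTR name of client_ip must
    resolve forward to a set of addresses containing client_ip."""
    try:
        ptr_name, _, _ = socket.gethostbyaddr(client_ip)   # reverse: IP -> name
    except socket.herror:
        return False                                       # no PTR record at all
    try:
        infos = socket.getaddrinfo(ptr_name, None)         # forward: name -> A/AAAA
    except socket.gaierror:
        return False                                       # PTR name doesn't resolve
    return client_ip in {info[4][0] for info in infos}
```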

The way I see it, there are three different categories of clients failing forward DNS checks (a classification sketch follows the list):

  • Non-existent DNS PTR. These are clients that resolve to an IP of 0.0.0.0 with my Socks library when doing a forward DNS look-up. They would include most IPv6 clients tunneling through IPv4 brokers, where any intermediary IPv4 PTR would be pointless, and those IPv4 clients that have misconfigured DNS records (intentionally or otherwise). My library is capable of distinguishing the one from the other, and I'm not looking to block out IPv6 clients sending requests through an IPv4 broker. I'm not quite sure about the latter (the misconfigured IPv4 clients) though, and would appreciate your thoughts on them.

  • PTR and forward (A/AAAA) records that don't match, but stay within the same ASN. These are clients that have possibly intentionally misconfigured DNS records, be it for security reasons or for VPN or proxy setups, but that resolve to the same ASN. For example, the IP 1.2.3.4 has the name crawler.someserver.xxx, but a forward DNS look-up of this name returns the IP 1.2.3.5, whose reverse DNS is someserver.xxx; both IPs are part of the same ASN. While it's a bit costly to check that many DNS records with each request, I'm able to cache results and to query IANA-registered ASN ranges via a local and regularly updated database. Such clients are, as far as I'm concerned, acceptable, but will have their requests looked into more often, just in case.

  • Spoofed DNS records. These clients are my primary concern and the main reason for my question. They include any IP addresses that completely fail a forward DNS check and have obviously spoofed DNS records, where the PTR returns an IP outside the ASN's range and the reverse DNS look-ups don't even remotely resemble one another (or where no names are assigned at all, which is also often the case). Honestly, I have no sympathy for these types, and I would like to know whether you think there's any reason I should reconsider blocking them out as soon as they appear.
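To make the three categories concrete, here is a rough classification sketch building on the check above. The `asn_of` helper is hypothetical and stands in for the local ASN database mentioned under the second category:

```python
import socket
from enum import Enum

class Verdict(Enum):
    PASS = "passes FCrDNS"
    NO_PTR = "category 1: no PTR record"
    SAME_ASN = "category 2: mismatch, but same ASN"
    SPOOFED = "category 3: spoofed or dead records"

def asn_of(ip: str):
    """Hypothetical helper: look up the ASN of an IP in a local,
    regularly updated registry database."""
    raise NotImplementedError

def classify(client_ip: str) -> Verdict:
    try:
        ptr_name, _, _ = socket.gethostbyaddr(client_ip)    # IP -> PTR name
    except socket.herror:
        return Verdict.NO_PTR
    try:
        forward_ips = {i[4][0] for i in socket.getaddrinfo(ptr_name, None)}
    except socket.gaierror:
        forward_ips = set()
    if client_ip in forward_ips:
        return Verdict.PASS
    client_asn = asn_of(client_ip)
    if client_asn is not None and any(asn_of(ip) == client_asn for ip in forward_ips):
        return Verdict.SAME_ASN
    return Verdict.SPOOFED
```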

I'm currently running services whose policies do not include any such client testing, but I have manually enforced blocks on a handful of such clients so far. All of these clients (which failed the FCrDNS test) are blacklisted due to other violations of my TOS as well, and I have yet to see legitimate use of my services by any client that both fails the FCrDNS test and spoofs its DNS records.

For example, one such block I'm enforcing is on the IP 93.174.93.52, which resolves to nlnd02.xsltel.com. The forward look-up of that name returns only the IP 76.72.171.131, which doesn't resolve at all using reverse DNS. The IP 93.174.93.52 was listed in my blacklist for HTTP proxy probing and is currently also listed in some other honeypots (AHBL and UCEPROTECT), but the IP 76.72.171.131 that it forward-resolves to in the FCrDNS test isn't listed anywhere, except that some honeypots report DNS problems, as they should. From that I gather that this client intentionally masks its DNS records to avoid detection by certain honeypots.
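Anyone wanting to reproduce that test can do so with two look-ups; bear in mind that DNS records change, so the results shown in the comments reflect what I observed at the time of writing:

```python
import socket

name, _, _ = socket.gethostbyaddr("93.174.93.52")        # 'nlnd02.xsltel.com'
ips = {i[4][0] for i in socket.getaddrinfo(name, None)}  # {'76.72.171.131'}
print("93.174.93.52" in ips)                             # False: FCrDNS fails
socket.gethostbyaddr("76.72.171.131")                    # raises socket.herror (no PTR)
```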

I guess my question is: are there any legitimate uses for masking DNS records, or should I update my web-application services to automatically block any such incoming traffic (in terms of content delivery) and change the TOS accordingly? The technical implementation of such testing is not a problem, nor are there any other legal concerns, at least none that I'm aware of. If you can think of any that could apply, don't hesitate to mention them, though.

These FCrDNS checks would be implemented at the web-application level. They're not supposed to be the only security measure, so think of them as an addendum to lower-level checks, mostly in a bid to stop content scrapers, forum spammers, and similar network feces dead in their tracks.
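To illustrate where this would sit, here's a minimal WSGI-style sketch with the result caching I have in mind. The middleware class is illustrative, `fcrdns_ok` is the check sketched earlier, and a production version would use a shared cache and non-blocking look-ups:

```python
import time

CACHE: dict[str, tuple[bool, float]] = {}  # client IP -> (passed, checked-at)
CACHE_TTL = 30 * 24 * 3600                 # cached verdicts expire after 30 days

def fcrdns_cached(ip: str) -> bool:
    hit = CACHE.get(ip)
    if hit and time.time() - hit[1] < CACHE_TTL:
        return hit[0]
    passed = fcrdns_ok(ip)                 # the check sketched earlier
    CACHE[ip] = (passed, time.time())
    return passed

class FcrdnsGate:
    """WSGI wrapper: withhold content from clients failing FCrDNS.
    Meant as an addendum to lower-level checks, not a replacement."""
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        ip = environ.get("REMOTE_ADDR", "")
        if ip and not fcrdns_cached(ip):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Blocked: forward-confirmed reverse DNS check failed (see TOS)."]
        return self.app(environ, start_response)
```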

Additionally, if you have any TOS examples where such filters and the reasoning behind them are properly described, all the better.

Thanks in advance for all your contributions!

TildalWave

2 Answers


Blocking IPs with no reverse DNS means punishing people who have a bad ISP. It seems that most ISPs have now understood that reverse DNS should be in place, but occasional mishaps still happen. To my knowledge, there is no "legitimate" reason not to implement reverse DNS, but I have seen it missing a lot, and rejecting requests on that ground seems harsh, and also unlikely to have the appropriate pedagogical effect.

Also, note that for IPv6, with automatically allocated addresses (computed from the machine's MAC address), a missing reverse DNS is likely to be the norm.
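To see why: with stateless autoconfiguration (SLAAC), the interface identifier is derived mechanically from the MAC address via EUI-64, so hosts mint their own addresses and nobody pre-provisions PTR records for them. A rough sketch of the derivation, in illustrative Python:

```python
def eui64_interface_id(mac: str) -> str:
    """EUI-64: flip the universal/local bit of the MAC's first octet
    and insert ff:fe between the two halves."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                       # flip the U/L bit
    full = octets[:3] + [0xFF, 0xFE] + octets[3:]
    return ":".join(f"{full[i] << 8 | full[i + 1]:x}" for i in range(0, 8, 2))

# A host with MAC 00:1a:2b:3c:4d:5e autoconfigures <prefix>:21a:2bff:fe3c:4d5e;
# no ISP writes PTR records for addresses it never handed out.
print(eui64_interface_id("00:1a:2b:3c:4d:5e"))  # 21a:2bff:fe3c:4d5e
```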

Tom Leek
  • Good points, thanks for your answer. I should probably mention these blocks would be in place at the web-application level (although I did add the `web-application` tag), and this _'appropriate pedagogical effect'_ that you mention can probably be handled with a proper response. That's why I'm also interested in any relevant TOS, to help me include that in my web-application response. As for IPv6, I need to investigate that a bit more, cheers for pointing it out. I do check for mobile users, and I can omit FCrDNS checks for those and shave response time along with it. – TildalWave Jan 30 '13 at 23:50
  • For IPv6 clients tunneling through IPv4, could I possibly whitelist all common and acceptable IPv6 tunnel broker end points, or would there be too many of those to compile an effective whitelist? What's your take on this? Cheers! – TildalWave Jan 30 '13 at 23:54
  • 2
    I believe rejecting IP because of a failed reverse-DNS is not worth the effort: security benefits are slight, but the runtime cost (time to make a reverse-DNS check) and the administration complexity overhead are high. Complexity is a problem in itself when you want to achieve security. – Tom Leek Jan 31 '13 at 13:59
  • The runtime cost isn't an issue as I'm caching results, so it's mostly a one-off. I'm running analysis in the background, and the average first look-up costs ~97 ms (with an almost exactly 1 s maximum, but that was a single fluke). Any subsequent requests are checked against the cache first, which costs ~6-8 ms tops; I've set the cache to expire in 30 days, but I keep the IP history. Mobile IPv6 users tunneling through IPv4 would cost me a lot more time though (until the look-up times out), so I'm skipping this step for those. I'm fine with both runtime cost and complexity so far. ;) – TildalWave Feb 01 '13 at 02:59

FCrDNS should only be required when end users need to properly identify themselves with a domain or organization. It is commonly required for email, where the From address header should match the PTR of the sending IP (IPv4 or IPv6 address).

Actually Tom, if you notice, Google requires IPv6-initiated email connections to have proper rDNS, so that's out. The fact is, the only reason companies cannot require rDNS is the plethora of established, misconfigured systems in production. The same goes for the -all SPF mechanism: what blocks it is the excess of misconfigured systems.

The sending IP must have a PTR record (i.e., a reverse DNS of the sending IP) and it should match the IP obtained via the forward DNS resolution of the hostname specified in the PTR record. Otherwise, mail will be marked as spam or possibly rejected. (Source: Google)

We've attempted to use blacklists like UCEBL and RFC-Ignorant, and we ended up with less spam, pissed-off end users, and reluctant third parties.

Jacob Evans