73

We have an SMTP-only mail server behind a firewall, which will have a public A record of mail.. The only way to access this mail server is from another server behind the same firewall. We do not run our own private DNS server.

Is it a good idea to use the private IP address as an A record in a public DNS server, or is it best to keep these server records in each server's local hosts file?

Geoff Dalgas

11 Answers

80

Some people will say that no public DNS records should ever disclose private IP addresses, the thinking being that you are giving potential attackers a leg up on information that might be required to exploit private systems.

Personally, I think that obfuscation is a poor form of security, especially when we are talking about IP addresses: in general they are easy to guess anyway, so I don't see this as a realistic security compromise.

The bigger consideration here is making sure your public users don't pick up this DNS record as part of the normal public services of your hosted application, i.e. external DNS lookups somehow start resolving to an address they can't get to.

Aside from that, I see no fundamental reason why putting private address A records into the public space is a problem, especially when you have no alternate DNS server to host them on.

If you do decide to put this record into the public DNS space, you might consider creating a separate zone on the same server to hold all the "private" records. This will make it clearer that they are intended to be private; however, for just one A record, I probably wouldn't bother.
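As a rough sketch of that idea, such a "private" zone might look something like the following (internal.example.com, the host names, and the 10.x addresses are all placeholders for illustration, not anything from the question):

$ORIGIN internal.example.com.
$TTL 3600
@        IN   SOA   ns1.example.com. hostmaster.example.com. (
                        2009050501 ; serial
                        604800     ; refresh
                        86400      ; retry
                        2419200    ; expire
                        3600 )     ; negative cache TTL
@        IN   NS    ns1.example.com.
; these names resolve publicly but point at addresses only reachable internally
mail     IN   A     10.0.1.25
backup   IN   A     10.0.1.26

Keeping such names under their own subdomain at least signals to anyone who stumbles on them that they are not meant to be reachable from the outside.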

Tall Jeff
  • +1, see comment to womble's answer for reason :) – Mihai Limbăşan May 05 '09 at 16:20
  • 2
    This is a good example of an issue with this idea: http://www.merit.edu/mail.archives/nanog/2006-09/msg00364.html – sucuri Aug 04 '09 at 19:12
  • Does this advice still apply if you have sensitive servers with public IP addresses, but behind a firewall restricting access? If the public DNS for the public IP addresses gives a roadmap of the infrastructure, isn't that some use to an attacker? Host identification? – Kenny Nov 28 '11 at 12:51
  • @Kenny Yes, in theory this does have some use, but it is information that is not hard to get, because the range of public IP addresses is readily discoverable anyway. I kind of addressed this in the answer, and adding to that notion, I would argue that if you are depending on hiding IP addresses or hostnames as any kind of material line of defense, you have much much bigger problems already. – Tall Jeff Nov 28 '11 at 14:12
  • @Tall Jeff I don't suggest hiding the fact that the ip addresses exist, I understand these exist and can't be "hidden" - but thought it may help not to give out exactly what each system is in the public DNS record as this might slow down an attacker - make them work harder to identify candidate hosts? Certainly wouldn't be depending on this for security at all, but when the choice exists in addition to other security methods, was wondering if it was worth taking. When an attacker can NSLOOKUP to find out what role each machine plays, surely that's not good? – Kenny Nov 28 '11 at 17:31
  • 1
    @Kenny To your point, it's certainly desirable to minimize the amount of information that is publicly discoverable, and you would not want to disclose something that you didn't need to or didn't at least have some kind of good cost/benefit trade-off to justify. No argument there. Aside from that, the core of my point (and I think we agree) was simply that obfuscation is a poor form of security and that there is no absolute good/bad, but only a continuum of cost/benefit trade-offs to be considered on a case-by-case basis depending on your risk tolerance, etc. – Tall Jeff Nov 28 '11 at 18:17
  • This is an old question, but I came here looking for guidance. The only issue I see with putting a non-routable address on a public DNS server is this: the browser looks up the address, it happens to resolve to an internal server on the user's own network that doesn't use the hostname (very common), and a page gets rendered; that would be rather confusing and could even make it look like their system had been hacked in some way. Ultimately that's their problem, but it's still a cause for confusion. – DeveloperChris Jun 19 '18 at 03:21
  • @sucuri, your link is broken, and archive.org doesn't have it. Do you know of anywhere I can find this discussion today? – Daniel Serodio Aug 21 '18 at 14:53
  • Working link for that provided by @sucuri above: https://archive.nanog.org/mailinglist/mailarchives/old_archive/2006-09/msg00364.html – Andrew Marshall Dec 25 '20 at 06:35
41

I had a lengthy discussion on this topic on the NANOG list a while ago. Though I'd always thought it was a bad idea, it turns out that it's not such a bad idea in practice. The difficulties mostly come from rDNS lookups (which for private addresses Just Don't Work in the outside world), and when you're providing access to the addresses over a VPN or similar it's important to ensure that the VPN clients are properly protected from "leaking" traffic when the VPN is down.
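To make the rDNS point concrete: reverse lookups for RFC1918 space are delegated to the IANA/AS112 blackhole servers rather than to anything you control, so from the outside world a PTR query for a private address normally comes back empty unless you serve the reverse zone yourself internally. A quick sketch (the address is just an example):

# reverse lookup for a private address from a public resolver
dig -x 192.168.1.2 +short            # typically returns nothing (NXDOMAIN)
# the enclosing reverse zone is not yours to delegate
dig NS 168.192.in-addr.arpa +short   # typically lists the blackhole/AS112 servers (e.g. blackhole-1.iana.org)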

I say go for it. If an attacker can get anything meaningful from being able to resolve names to internal addresses, you've got bigger security problems.

womble
  • 1
    +1, thank you for being a voice of sanity in all the FUD responses to this question. "Security risk" my lower dorsal regions, and seeing routing problems and DNS issues colluded into one knee-jerk "don't do it" reaction just makes me wonder about the competence of people running networks all over the place. – Mihai Limbăşan May 05 '09 at 16:19
  • 1
    Correction: Make that "seeing routing problems and DNS issues *and* authentication/identity issues colluded". – Mihai Limbăşan May 05 '09 at 16:32
7

In general, introducing RFC1918 addresses into public DNS will cause confusion, if not a real problem, at some point in the future. Use raw IPs, hosts-file records, or a private DNS view of your zone so that the RFC1918 addresses are usable behind your firewall but are not included in the public view.

To clarify my response based on the other submitted response, I think introducing RFC1918 addresses into public DNS is a faux pas, not a security issue. If someone calls me to troubleshoot an issue and I stumble across RFC1918 addresses in their DNS, I start talking really slowly and asking them if they've rebooted recently. Maybe that's snobbery on my part, I don't know. But like I said, it's not a necessary thing to do and it's likely to cause confusion and miscommunication (human, not computer) at some point. Why risk it?

jj33
  • 2
    What real problem(s) are these? In what ways will people be confused? – womble May 05 '09 at 02:30
  • 2
    So it's... polite... not to put 1918 addresses into public DNS? I've hit plenty of problems that "hidden" and "split horizon" DNS zones have caused, but not nearly so many caused by private IP in public DNS. I just don't see the problem. – womble May 05 '09 at 02:36
  • Then we disagree. I'm quite happy with the public/private split-view zones I've created. It appears that we have both stated opinions. If you have NANOG on your side, you probably have consensus. – jj33 May 05 '09 at 02:43
  • 3
    @womble, computers might be confused if for some reason they attempt to connect to that host outside your network and, instead of getting the SMTP server they expected, they got whatever was living at that IP address on the LAN they were currently connected to. It could even be that one of your staff using a laptop on a remote network might start spewing the username and password out in plain text on someone else's network, just because they also happen to have a 192.168.1.1 – Zoredache May 05 '09 at 02:53
  • 19
    The problem I have with your answer is that you allude to problems, but don't provide any details. If there are reasons not to do it, I want to know about them, so I can make a fully reasoned decision on the subject. – womble May 05 '09 at 02:53
  • 1
    @Zoredache: Why is someone resolving a name they don't have access to? DNS isn't the only place you could get private addresses, anyway -- HTML can use IP literals, for instance... – womble May 05 '09 at 02:56
  • 1
    I would also like to know what "confusion" can arise. The only "confusion" I can think of is RFC1918 addresses in public NS or MX records, and that is a big fat error, not confusion. The security issue is a red herring, 90% of people will already have 192.168.1.0/24 and nobody will really bother to check DNS for more, and if you're bothered about leaking internal networks, have you checked your SMTP headers lately? Thought so. – Mihai Limbăşan May 05 '09 at 05:25
  • 2
    Having RFC1918 addresses in the public DNS is for example superbly useful if you push routes to the internal networks through VPNs - that allows people to use their preferred DNS server and *still* resolve your internal names correctly. – Mihai Limbăşan May 05 '09 at 05:25
  • In most environments, putting private address space in public DNS isn't a big deal. It becomes a problem in large companies and governments where divisions, agencies, or operating companies have an untrusted intranet between business units. In that case, the meaning of "internal" isn't clear-cut. – duffbeer703 May 09 '09 at 21:02
  • Views are, for me, the best idea: one view contains only the public records, while the second one also contains the private records. The second view is served only on the private (local or VPN) networks. – Benoit Jun 28 '09 at 19:04
3

Though the possibility is remote, I think you may be setting yourself up for some sort of MITM attack.

My concern would be this. Let's say one of your users with a mail client configured to point at that mail server takes their laptop to some other network. What happens if that other network also happens to have the same RFC1918 range in use? That laptop may attempt to log in to the SMTP server and offer the user's credentials to a server that shouldn't have them. This would be particularly true since you said SMTP and didn't mention that you were requiring SSL.

Zoredache
  • If the user has a laptop they use in your office as well as elsewhere, chances are they'll have configured their hosts file to point at the internal IP of the MTA, or used the IP directly in their MUA config. Same end result. Bring on IPv6 and the death of RFC1918, it's the only way to be sure... – womble May 05 '09 at 03:00
  • Excellent point Zoredache. This is an interesting attack vector. Depending on the MUA it might present the usual "something annoying happened, please click me to do what you wanted me to do in the first place" dialog box, or it could fail outright if the ssl cert doesn't match. – Dave Cheney May 05 '09 at 16:33
  • Is this attack scenario effectively eliminated if all servers (namely web/HTTPS, IMAP, and SMTP) in the valid network require SSL/TLS-based client connections? – Johnny Utahh Sep 23 '19 at 10:53
  • @JohnnyUtahh, well, you need all servers to support TLS, with valid certs, and you need all clients to be configured to verify the certs and never try a non-TLS connection. Which is a more common default now than 10 years ago. But there is still old/stupid software that might try non-TLS connections. – Zoredache Sep 23 '19 at 21:48
  • Yep, all makes sense, thanks @Zoredache. – Johnny Utahh Sep 23 '19 at 23:42
3

Your two options are /etc/hosts and putting a private IP address in your public zone. I'd recommend the former. If this represents a large number of hosts, you should consider running your own resolver internally; it's not that hard.
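For just a couple of machines, the hosts-file approach is a one-line entry on each box that needs to reach the mail server; something like the following, where the address and name are placeholders rather than anything from the question:

# /etc/hosts on each internal machine that talks to the mail server
10.0.0.25    mail.example.com    mail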

Dave Cheney
  • 1
    That's certainly an option, but why? What does running an internal resolver or (much smarter) using something like BIND views gain you beside administrative overhead and maintenance burden? That's what I don't understand. – Mihai Limbăşan May 05 '09 at 16:10
  • 1
    Running your own name server is not rocket science. If your network is of a sufficient size that you consider using /etc/hosts to be a hack or too time consuming, then you need to set up a pair of resolvers in your network. As a side benefit you reduce the amount of DNS traffic leaving your network and you speed up resolution of common queries. – Dave Cheney May 05 '09 at 16:17
  • 3
    I know it's not rocket science, but it's a maintenance overhead and a potential security risk. Certainly a higher security risk than leaking the existence of a RFC1918 network. DNS traffic is utterly negligible - I host in excess of 80 moderately large and busy zone files on my DNS at work and weekly DNS traffic is less than 2 minutes of Youtube. Speeding up query resolution is actually the first halfway sane argument against RFC1918 numbers in DNS I've seen here :) Upvoted for actually thinking a bit beyond the usual knee-jerk "oh, noes, it's a security risk" reaction :) – Mihai Limbăşan May 05 '09 at 16:23
  • @Mihai, if it was my network, I would run a pair of resolvers. The maintenance overhead for me is less than the value they bring. In addition to doing split horizon DNS tricks, you get the ability to use CNAMEs on your network so you can create useful aliases for moving services around between machines without chasing config. I also find DNS an excellent canonical reference of where hosts live, because it's not possible to move the host (or service, if you follow my CNAME advice) without updating your internal DNS. – Dave Cheney May 05 '09 at 16:26
  • @Mihai - almost all ISPs have access to AS112 anycast for fast resolution of RFC1918 space. More likely (per my answer) are timeouts as the SMTP layer tries to reach an IP address that's either not in the routing table, or in the routing table but firewalled. – Alnitak May 05 '09 at 16:28
  • 1
    @Alnitak: I understand where you're coming from but that's still not a DNS problem, and I maintain that trying to fix issues originating somewhere else through DNS is not a good idea at all. Problems should be fixed at the source, not patched up by DNS hacks - hacks make networks brittle. – Mihai Limbăşan May 05 '09 at 16:34
  • 2
    well, yes, I agree. And putting your private host's information in the public DNS is a hack solution for the problem of not having an internal DNS server... :) The problem is that the higher layers don't _know_ that this information is supposed to be "private". – Alnitak May 05 '09 at 16:44
  • *nods* My take is that this information should not be regarded as private, because the distinction between public and private should not be made at the name resolution level. Guess we'll have to agree to disagree :) – Mihai Limbăşan May 05 '09 at 17:40
3

No, don't put your private IP addresses in the public DNS.

Firstly, it leaks information, although that's a relatively minor problem.

The worse problem, if your MX records point to that particular host entry, is that anyone who does try to send mail to it will at best get mail delivery timeouts.

Depending on the sender's mail software they may get bounces.

Even worse, if you're using RFC1918 address space (which you should, inside your network) and the sender is too, there's every chance that they'll try and deliver the mail to their own network instead.

For example:

  • network has internal mail server, but no split DNS
  • admin therefore puts both public and internal IP addresses in the DNS
  • and MX records point to both:

 $ORIGIN example.com.
 @        IN   MX    10 mail.example.com.
 mail     IN   A     192.168.1.2
          IN   A     some_public_IP

  • someone seeing this might try to connect to 192.168.1.2
  • best case, it bounces, because they've got no route
  • but if they've also got a server using 192.168.1.2, the mail will go to the wrong place

Yes, it's a broken configuration, but I've seen this (and worse) happen.

No, it's not DNS's fault, it's just doing what it's told to.

Alnitak
  • 2
    How is delivering mail to the wrong machine a DNS problem? You should authenticate the SMTP server. That's an SMTP configuration problem which has absolutely nothing to do with DNS. You're not even comparing apples to oranges here, you're comparing a radioactive buttered toast to five milligrams of Lagrangian derivatives on a stick. If you're worrying about getting the wrong MX or A result you should use DNSSEC instead of holding DNS responsible for what it's not responsible for, and if you're mistakenly delivering SMTP to the wrong RFC1918 number you've misconfigured or misdesigned your network. – Mihai Limbăşan May 05 '09 at 16:27
  • (reposted comment for clarification) – Mihai Limbăşan May 05 '09 at 16:28
  • If someone on your network "made up" an IP number then the IP protocol is functioning exactly as designed, i.e. without security in mind. What you are asking is "how can I trust that I'm actually talking to whomever I'm supposed to talk to?" and the answer to that cannot be delivered by IP and/or by DNS, the answer to that is delivered by DNSSEC and/or SSL/TLS and/or an application layer mechanism. – Mihai Limbăşan May 05 '09 at 16:30
  • Just read your comment to Dave's post - your post makes more sense now :) I still disagree with the premise, but I don't think it's irrational anymore... – Mihai Limbăşan May 05 '09 at 16:35
  • @Mihai - firstly, cool it with the attitude! You're now conflating security issues with simple good network manners. Yes, I believe you shouldn't put RFC1918 addresses in the public DNS, but not for security reasons, it's simply not "nice". I also didn't say *anything* about proving that the far end is who they say they are. Believe me, I know _plenty_ about DNSSEC. – Alnitak May 05 '09 at 16:35
  • (Offtopic comment) "Cool it with the attitude"? Um, how about "cool it with the condescension" and we counter each other with arguments instead of personal issues, shall we. – Mihai Limbăşan May 05 '09 at 16:38
  • (On-topic comment) I interpreted *"Even worse, if you're using RFC1918 address space (which you should, inside your network) and the sender is too, there's every chance that they'll try and deliver the mail to their own network instead."* as referring to authentication, hence the diatribe - if I was wrong I retract my argument. – Mihai Limbăşan May 05 '09 at 16:40
  • 2
    no, it wasn't about authentication at all, just about connections ending up in the wrong place. I saw _plenty_ of that when Verisign decided to wildcard *.com back in ~2001. – Alnitak May 05 '09 at 16:46
  • Hah, the Verisign wildcard was exactly what popped into my head when reading that paragraph. Guess we're seeing the same issue from two different directions - my reaction to it back then wasn't "why are people upset about their misaddressed mail ending up at Verisign?" (i.e. DNS causing misrouting), it was "Why are people delivering their misaddressed mail to Verisign?" (i.e. DNS being used for authentication.) Granted, DNSSEC *still* hasn't picked up enough momentum... – Mihai Limbăşan May 05 '09 at 16:54
  • It isn't so much a security issue as it is one of identity. If everyone decided to expose their 10./8 network, can you imagine the ensuing anarchy as different machines claimed to all be 10.1.1.1? "I'm Spartacus!" "No, I am!" "Ignore him, I'm Spartacus!" "No, me!" "Me too!" (group looks and speaks in unison) "Who are you?!?" – Avery Payne May 05 '09 at 16:55
  • Why were people delivering their misaddressed e-mail to Verisign - because the DNS told them to! Sure, misaddress e-mail was always a problem, but nobody ever expected a major gTLD to do something so stupid! :) FWIW, I'm doing my bit towards DNSSEC - do a Google search for 'SAC035'. – Alnitak May 05 '09 at 17:02
  • @Avery: Yes, that's what I was referring to (and why I became so *ahem* passionate about it), but ultimately it boils down to trust in the DNS server, which is a major risk in the first place, hence DNSSEC. It does not matter one iota even if everyone were to advertise PTRs to 10./8, all that matters is which server is authoritative as far as the client's trusted DNS server is concerned. That trust relationship is the problem, that's what makes it a security issue, and this problem can only be fixed by something like DNSSEC, which currently is a chicken-and-egg problem like IPv6 :( – Mihai Limbăşan May 05 '09 at 17:10
  • @Mihai Limbasan - ok, now I understand your point of view. True, the DNS side of things is indeed an issue ATM, but even without it, with just straight IP routing, it's still trouble. The two of them together would probably bring all kinds of untold horror unto the internet in general, so we're agreeing to the same thing, just on two different sides of the same coin. :) – Avery Payne May 05 '09 at 17:14
  • @Alnitak: Nice :) I envy you actually being able to spell out results. The extent to which I'm contributing is currently fighting the windmills^W^W^W lobbying/petitioning the decentralized government structures in my country to at least *consider the idea*. The results so far are a looks of bewilderment or resounding silences on the receiving end, and blinding headaches and homicidal thoughts on this end, but that's par for the course when dealing with bureaucratic inertia, I guess, so I try to stay positive and look forward to the day when .ro. is signed... – Mihai Limbăşan May 05 '09 at 17:15
  • Incidentally - FWIW, I got a bit carried away there, and apologize. Guess Jeff was right and SO/SF aren't that well suited to forum-type back-and-forth discussions... Perhaps we can commiserate about the current progress in networking technology deployment (or debatable lack thereof) somewhere else :) – Mihai Limbăşan May 05 '09 at 17:21
  • @Alnitak: +1 for "No, it's not DNS's fault, it's just doing what it's told to." – Mihai Limbăşan May 05 '09 at 17:25
  • @MihaiLimbăşan I know it's been a decade, but curious where you land on this issue now. Also, I'm only a student, but studying RFC1918, Section 5 reads "If an enterprise uses the private address space, or a mix of private and public address spaces, then DNS clients outside of the enterprise should not see addresses in the private address space used by the enterprise, since these addresses would be ambiguous." Am I misreading this section? – Amir Soofi Jul 24 '19 at 06:21
3

There may be subtle problems with it. One is that common protections against DNS rebinding attacks filter out private-range answers coming from public DNS servers. So you either open yourself to rebinding attacks, or the local addresses don't resolve, or you need more sophisticated configuration (if your software/router even allows it).
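As a concrete example of such a protection: dnsmasq, which many home routers ship, can be told to reject RFC1918 answers coming from upstream resolvers, and you then have to whitelist any domain that legitimately publishes private addresses. A sketch of the relevant options, with example.com standing in for your zone:

# /etc/dnsmasq.conf
stop-dns-rebind                 # drop upstream answers that point into private address space
rebind-localhost-ok             # but still allow 127.0.0.0/8 answers (used by some DNSBLs)
rebind-domain-ok=/example.com/  # explicitly trust private answers for this one domain

Clients behind a resolver configured like this will never see your private A record unless the exception is added, which is exactly the "more sophisticated configuration" mentioned above.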

  • +1 DNS rebinding is bad!! https://medium.com/@brannondorsey/attacking-private-networks-from-the-internet-with-dns-rebinding-ea7098a2d325 – Ohad Schneider Jun 22 '18 at 20:36
1

If by private you mean an address in 10.0.0.0/8, 192.168.0.0/16, or 172.16.0.0/12, then don't. Most internet routers recognize it for what it is: a private address that must never be routed directly to the public internet, which is part of what made NAT popular. Anyone who queries your public DNS server will retrieve the private IP address, only to send a packet to ... nowhere. As their connection attempts to traverse the internet to your private address, some (sanely configured) router along the way will simply eat the packet alive.

If you want to get email from the "outside" to come "inside", at some point, the packet has to cross your firewall. I would suggest setting up a DMZ address to handle this - a single public IP address that is tightly controlled by any router/firewall you have in place. The existing setup you describe sounds like it does exactly that.

EDIT: clarification of intent... (see comments below). If this doesn't make sense, I'll vote to remove my own post.

Avery Payne
  • 3
    That's all nice and true, but you haven't given an actual reason for why one should not publish RFC1918 addresses in DNS. You have just described what RFC1918 addresses are and that it's possible to not have a route to some of them. How is that different from any other IP number? It's possible to not have a route to 198.41.0.4 - does that mean it's wrong to publish 198.41.0.4 in DNS? DNS is a *name resolution system*. It has nothing to do with routing, the two are orthogonal. You're colluding two categories of problems, which basically amounts to FUD. – Mihai Limbăşan May 05 '09 at 16:16
  • 1
    The context of the discussion was the use of private IP addresses in a *public* DNS server. The point of the post was to indicate that, by default, routers are not to route private IP addresses. I was not attempting to indicate that you *can't* use private IP addresses in a DNS server, only that you shouldn't provide those IP addresses "to the outside". If this is not clear enough, I'll gladly withdraw the post. Otherwise, I disagree, the post is 100% spot-on - the net effect for this person is that /they will have problems/ if they do this. – Avery Payne May 05 '09 at 16:43
  • *nods* Your comment under Alnitak's post cleared it up :) Thanks. – Mihai Limbăşan May 05 '09 at 17:19
  • 1
    *"Anyone attempting to contact your public DNS server will retrieve the private IP address from DNS, only to send a packet to .... nowhere"* - nope, you have actually just described DNS rebinding and it works on some of the most secure routers out there, including my PepWave Surf SOHO: http://rebind.network/rebind/ – Ohad Schneider Jun 22 '18 at 20:45
1

It's best to keep it in the hosts file. If only one machine is ever supposed to connect to it anyway, what do you gain by putting it into public DNS?

sh-beta
  • Working in the cloud you could have thousands of private machines. A few years back, Netflix said they had 2,000+ Cassandra nodes. That's not practical to use the `/etc/hosts` file because all 2,000 machines then need to manage these IP/name pairs... – Alexis Wilke Mar 15 '19 at 23:36
0

I consider it inconvenient to change a hosts file on a large quantity of hosts, but not a genuine issue. I would consider it a real issue that a critical layer 3 service can fail and latch itself into an unrecoverable scenario because there was a cyclic DNS dependency. Hosts files have their place, particularly in layer 3 network operation, where we might not be able to assume a DNS service works yet.

I am searching for a genuine rationale to prohibit, by policy, the use of reserved private IP network segment addresses in public DNS.

I see no technical issue with using public DNS names to resolve IP addresses for internal-to-internal use. Often, in this rare scenario, it's roughly equivalent or less work to use a split DNS zone (using a hosts file is logically equivalent to a separate DNS zone, in my opinion). So if you are considering putting private addresses in public DNS, consider whether you have a greater topology issue, and make sure your changes work towards resolving that topology.

I do see a vanity problem: there will be countless other networks where the public name in question will accidentally (to the advantage of an attacker) route to resources an attacker controls. The vanity problem being that my DNS name can be shown to a user while the communication goes to a server I don't control. In that circumstance, protocol-dependent things can happen. In HTTP land, the use of HSTS and keeping a TLS private key secret should provide sufficient protection, but until browsers decide to treat all web pages served from private networks as "insecure", the vanity question remains for HTTP. For other protocols, particularly where there is no proof of authenticity such as publicly trusted TLS (as with HTTPS), privately trusted keys (as with SSH), or mutual TLS (as with OpenVPN), a public DNS name that resolves to a private address may carry the same vanity issues.

Some hardware vendors intentionally operate addresses like this, or at least I thought so; "routerlogin dot net" may have previously, but doesn't now, at least not from where I'm located. Alternatively, the manufacturer might park that as a black-hole address and rely on intercepting either your routing or your DNS resolution (e.g. split DNS) to implement a user-friendly router setup page.

I've had a security firm complain about private DNS records being in spam databases, which kind of makes sense if the spam list is aimed at lazy email operators; but we should also universally agree not to deliver or accept email from a private IP address, especially without proof of authenticity and authentication, which makes that complaint not make sense to me in the vanity sense.

When I say split DNS, I'm specifically referring to operating two or more DNS zones that claim to be the authority for a name but serve different addresses, e.g. a public and a private zone for example dot com that resolve to a public or a private address respectively.

I think there are technical problems with the use of split-zone DNS (aka private DNS), particularly when any OS or layer 3 technology (like a VPN) vendor is involved, because at the end of the day the OS's network stack programmer or the layer 3 operator has more control over your bare DNS resolution than you do. In particular, some operating systems use multicast DNS, where the fastest DNS response is considered authoritative, and I bet you your private DNS server over a VPN will be slower than the ISP's DNS cache or resolver, which is simply closer to the end user in hops. This means that if a DNS name resolves to either a public or a private IP address, depending on which network the client is (in theory) supposed to be resolving through, you're likely to end up in a painful situation.

Because of this, I prefer to never have one DNS record carry more than one authoritative answer, no matter what its network topology is.

For me, that means end users (read: "developers I support") tend to have to deliberately choose whether they want to resolve a private address or a public address by choosing between two distinct DNS names. That applies to their services/applications, but also to their web apps when and where those web apps may be accessible over both private and public networks because of a VPN.

I dislike selectively choosing the routing via DNS. IP is meant to be resilient to network failures, and this won't be. What would be? Something more "IP"-like would be to have a canonical IP address (which, to be canonical, cannot be a private network address) and inject a (/32) route for that address that sends packets to a router I control, which repeats the process until the communication reaches my host over a private network such as a VPN+LAN, without ever having routed over a public network unless the VPN was down. For people who consider managing a few hosts files too burdensome, I suspect managing a route table like this would be much further from easy, unless you happen to be on IPv6 or are one of the old guard who own a class A or class B sized public IP block.
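A minimal sketch of that route injection on a Linux client, assuming a made-up canonical address of 203.0.113.25 for the host and a VPN gateway at 10.8.0.1 on tun0 (all placeholders):

# steer traffic for the host's canonical (public) address into the VPN tunnel
ip route add 203.0.113.25/32 via 10.8.0.1 dev tun0
# if the VPN (and therefore this route) is down, packets follow the default route instead

That keeps a single canonical address in DNS while the path selection happens at layer 3, which is the point being made above.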

Side note: I totally disagree with the use of NAT as a de facto "part of the defensive network topology", i.e. the common setup where there is a handful of public addresses and a lot of port address translation (especially in Docker-heavy systems). Accidentally configuring a firewall to be "default reject" via an implementation detail of the ISP, whereby the ISP just happens not to currently route any traffic destined for the LAN side of the firewall: I think everyone would agree that is a very poor defense strategy, yet it is the default one I see everywhere. It's so pervasive that, as part of the IPv6 adoption curve, network operators have been confused and demanded technical support for NAT on IPv6, where there is almost zero reason other than cargo-culting to implement it; and if you do implement it, there is a good chance you literally don't have enough RAM to build the translation tables that NAT requires to function, meaning IPv6 NAT takes on some strange dynamic characteristics that, I think, 20 years ago would have been seen as a completely irresponsible and risky kind of software to put on a firewall, where every assertion needs to be correct the first time and hardened against abuse.

In general, I think the assertion that all public DNS records you own should resolve to hosts you control is a good baseline, and it also made perfect sense when you could realistically own more IP addresses than you had cause to need. Even now, if you have to use private DNS, I encourage you to host a split public DNS with a known-safe black-hole address to resolve to, such that no network change by friendly or other operators can substantially change your deployment.

I think the fear of disclosing private network addresses and segments is silly and baseless. Any attacker already knows which private network segments you own and operate; the list is small enough to be memorized. It would be more surprising to find out that an operator doesn't use a private network, akin to finding out an operator doesn't use TCP/IP or doesn't use OSI.

I don't see any issue with obscure DNS names that are publicly resolvable to private network addresses: obscure DNS names that have no chance of brute-force discovery, and no chance that a user could divine them and intentionally type them into an address bar. Virtually every DNS zone is operated with zone transfers disabled, meaning the zone is opaque unless you happen to know the exact DNS name before resolving. (With zone transfers enabled, anyone can list all your DNS records rather than guessing and checking.)

The reason I use private addressing at all is that there is a shortage of public IPv4 addresses, and there are economic factors that force me to occasionally work with less than I need. Prolific adoption and use of IPv6 would eliminate my need for ever using private DNS, but IPv6 is very often not deployed because there is cost in the hardware, software, and operational staffing.

Final note: sometimes I'm forced to use DNS names by middleware; for example, certain cloud providers' load balancers only work if you can forever use public DNS resolution against a resolver controlled by the cloud provider, even when using private address space.

I don't have a dictatorial answer here, just some ideas that I didn't see elsewhere in this thread: "certainly don't if the audience is people, people will make mistakes"; "probably don't if you have any other realistic options"; "if you have to, it's fine; be careful and be informed, like anything we do online".

ThorSummoner
0

I arrived here as I was looking for similar information and was surprised that many say it's fine to leak your private IP addresses. I guess in terms of being hacked, it doesn't make a huge difference if you are on a safe network. However, DigitalOcean used to have all local network traffic on the exact same cables, with everyone really having access to everyone else's traffic (probably exploitable with a man-in-the-middle attack). If you could just get a computer in the same data center, having that information would certainly bring you one step closer to hacking my traffic. (Now each client has its own reserved private network, as with other cloud services such as AWS.)

That being said, with your own BIND9 service, you could easily define your public and private IPs. This is done using the view feature, which includes a conditional. It allows you to query one DNS server and get an answer about internal IPs only if you are asking from one of your own internal IP addresses.

The setup requires two zones. The selection uses the match-clients option. Here is an example setup from Two-in-one DNS server with BIND9:

acl slaves {
    195.234.42.0/24;    // XName
    193.218.105.144/28; // XName
    193.24.212.232/29;  // XName
};

acl internals {
    127.0.0.0/8;
    10.0.0.0/24;
};

view "internal" {
    match-clients { internals; };
    recursion yes;
    zone "example.com" {
        type master;
        file "/etc/bind/internals/db.example.com";
    };
};
view "external" {
    match-clients { any; };
    recursion no;
    zone "example.com" {
        type master;
        file "/etc/bind/externals/db.example.com";
        allow-transfer { slaves; };
    };
};

Here is the external zone; we can see that the IPs are not private:

; example.com
$TTL    604800
@       IN      SOA     ns1.example.com. root.example.com. (
                     2006020201 ; Serial
                         604800 ; Refresh
                          86400 ; Retry
                        2419200 ; Expire
                         604800); Negative Cache TTL
;
@       IN      NS      ns1
        IN      MX      10 mail
        IN      A       192.0.2.1
ns1     IN      A       192.0.2.1
mail    IN      A       192.0.2.128 ; We have our mail server somewhere else.
www     IN      A       192.0.2.1
client1 IN      A       192.0.2.201 ; We connect to client1 very often.

As for the internal zone, we first include the external zone, which is how it works; i.e. if you are an internal computer, you only see the internal view, so you still need the external zone definitions, hence the $INCLUDE directive:

$include "/etc/bind/external/db.example.com"
@       IN      A       10.0.0.1
boss    IN      A       10.0.0.100
printer IN      A       10.0.0.101
scrtry  IN      A       10.0.0.102
sip01   IN      A       10.0.0.201
lab     IN      A       10.0.0.103

Finally, you have to make sure that all your computers now make use of that DNS server and its slaves. Assuming a static network, on a Debian-style system that means editing your /etc/network/interfaces file and listing your DNS IPs in the dns-nameservers option (picked up by the resolvconf package), or putting them in /etc/resolv.conf directly. Something like this:

iface eth0 inet static
    ...
    dns-nameservers 10.0.0.1 10.0.0.103 ...

Now you should be all set.
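To confirm that the two views are answering as intended, you can query the server directly from inside and from outside; a quick check along these lines (names taken from the example zones above, and the external test assumes it runs from a host that does not match the "internals" ACL):

# from an internal machine, the "internal" view answers
dig @10.0.0.1 printer.example.com A +short     # expect 10.0.0.101
# from an external machine, the "external" view answers
dig @ns1.example.com printer.example.com A +short   # expect no answer; the record only exists internally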

Alexis Wilke
  • If attacker can potentially do something they shouldn't with your network resources, and all they are missing is the IP address... you're doing it wrong. – csauve Mar 11 '21 at 23:27