135

This question has been asked several times; I'll link a few:

However, these are old questions and I know that, at least in some respects, the answers provided are out of date. I'm providing my research in the hopes that I can find out which parts of it are outdated and which are still good. I'm looking for modern answers.

The question is straightforward, but here's the situation. I've got web services that access sensitive information (for perspective: a breach would be bad for our company and our customers, but probably not life-threatening to either). I want to provide easy access to these systems, but security is obviously important. (Some of these services are accessed via non-browser clients, as web services. Some are simply web pages that humans use from modern browsers.)

To that end, we're keeping the most important services accessible only internally. This way, an attacker a) is unlikely to find them accidentally, and b) has one extra obstacle to overcome.

“Defense in depth”, however, suggests that we also encrypt the traffic internally. That makes it harder for someone inside the network to casually sniff passwords off the wire.

In the spirit of “doing my homework” I've gathered several options, some less pleasant than others. I am not entirely certain which of these are even feasible now, because I know that some of the rules have changed.

Self-signed Cert

This seems to be the “go-to” answer for this question most of the times it's asked. This provides resistance to the casual sniffing problem, but I'd have to tell all the employees “the big nasty warning page is fine, just click through it”. I've been working hard to train them to never ignore that page, and most of them wouldn't understand the nuances of “the page is sometimes ok, when you know and expect the site to be using a self-signed cert” (they tend to stop listening at the comma :) ).

Self-signed Cert, but installed and trusted on each system

This fixes the nasty page, I think, but I'd have to set it to be trusted on each and every computer that accessed it. That's possible, but undesirable.

Also, I'm not 100% certain this is even possible for internal addresses any longer. It's my understanding that modern browsers are supposed to refuse any certificate that purports to certify a machine that resolves to a non-public IP address or whose name is not fully qualified.

Additionally, the process for installing system root certs on mobile devices is... basically impossible. It requires a rooted Android phone, and a relatively arcane process for iOS devices.

Set up my own CA

Basically the same problem as with installed and trusted certs, but I only have to install the root cert, not one cert for every system I stand up. Still basically shuts out all the mobile devices, including the tablets managers like to use in meetings.

I'm even less convinced this would work, because the IPs will be internal and the DNS will not be Fully Qualified.

Use public certs, but for internal addresses

We could configure fully qualified addresses (like “secret.private.example.com”) to resolve to internal addresses (like “192.168.0.13”). Then, we could use certificates for those public DNS names (possibly even wildcard certs for “*.private.example.com”).
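
For illustration, the intended setup would look something like this (a sketch only, reusing the example name and address above; the record would live in the public example.com zone):

# Hypothetical record in the public zone, pointing at a private address:
#   secret.private.example.com.  300  IN  A  192.168.0.13

# Anyone can resolve the name, but only hosts on our internal network can reach the address:
dig +short secret.private.example.com
# 192.168.0.13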

This might or might not work: browsers might reject the cert because the name resolves to a local address.

If it does work, it might not be a good idea. A laptop taken from our internal system to another network (possibly very quickly, via a VPN disconnect) could attempt to access secret things at local IP addresses on other networks. As long as HSTS is in force, this shouldn't be a big deal, because without our private key they won't be able to convince the client to accept the forged site and won't be able to force a downgrade to HTTP. But it seems undesirable, and it breaks the “no private addresses in public DNS” rule that (maybe?) has gone into effect recently?

So, all this to say... what's the right answer? I could just “give up” and expose the private services publicly, trusting that the encryption and application-specific authentication requirements will keep things safe. Indeed, for some less sensitive things, this is a fine course. But I worry that application-specific authentication could be buggy, or credentials compromised in other ways (people are always the weakest link), and I'd prefer there to be an additional obstacle, if at all reasonable.

alficles

9 Answers

33

Certificate validation is done to make sure that the peer is the one you expect. Validating a server certificate in the browser is mainly done by checking that the hostname from the URL matches the name(s) in the certificate and that you can build a trust chain to a locally trusted CA certificate (i.e. the root certificates stored in the browser or OS). Additionally there are expiration and revocation checks.

To make sure that this process works as intended, it is essential that the issuer of the certificate validates that you actually own the hostname(s) in the certificate. This means that you can only get a certificate issued by a public CA if you can prove that you own the name, which usually requires that the host with this name is publicly visible - at least for the cheap certificates where validation is done automatically. Of course, you could work around this problem by actually having the hostname available in public (i.e. a single server reachable as *.example.com) and then use the issued certificates for your internal systems.

(Mis)using a public CA for internal systems this way is kind of ugly but should mostly work. You still need internet connectivity, because revocation checks are done using information from the CA, i.e. from outside your local network. This can be somewhat mitigated by using OCSP stapling, but not all clients and servers support it. If you don't allow your clients to ask the external CA for revocation information, connection setup may sometimes be slow because the client keeps unsuccessfully trying to fetch that information.
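
As a quick check of whether a given server is actually stapling OCSP responses (the hostname below is just a placeholder), something like this can be used:

# Prints the stapled OCSP response if the server sends one, otherwise "no response sent"
openssl s_client -connect intranet.example.com:443 -servername intranet.example.com -status </dev/null 2>/dev/null | grep -i -A 2 'OCSP response'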

Thus, using a public CA should be possible, but it has drawbacks and it feels like playing with the CA system in the wrong way - so I don't recommend it. The only other alternative when dealing with lots of certificates is to run your own local CA. But as you realized yourself, this means installing and trusting the root certificate on all your local systems (which can be automated for some systems). It also means that you really have to protect this local CA against misuse, because it is not limited to issuing only local certificates but could in theory issue certificates for any public domain, like google.com. And since all your local systems trust this CA, they will trust those certificates too. Also, certificate pinning is not checked on most systems when the issuing CA was explicitly added as trusted. This makes it easy to misuse the CA for legal or illegal SSL interception (i.e. man-in-the-middle attacks).

Which means that there is no really good option: either misuse a public CA for internal hosts, with all its drawbacks, or create your own CA, with all of its drawbacks. I'm sure there are also public CAs that will help you with this, but that will probably be more costly than you want.

Steffen Ullrich
  • Excellent answer, as usual. +1. I suppose this is why the management interface on almost all home routers is accessible by HTTP only, and not HTTPS. – mti2935 Apr 19 '21 at 13:35
  • The only issue I can't live with is : https://stackoverflow.com/questions/43862412/why-is-brotli-not-supported-on-http ... if brotli worked on http I wouldn't be here in the first place :D – yota Oct 04 '21 at 16:51
  • What about getting a wildcard certificate, e.g. `*.example.com` where you already serve public content from `public.example.com`, and then you serve split horizon content out of `private.example.com`? Wouldn't that work? – Kevin Jul 17 '22 at 17:49
  • @Kevin: Yes it will work and is basically a variant of what I've described in my answer. To cite: *"Of course you could work around this problem by actually having the hostname available in public (i.e. single server which is accessible with *.example.com) and then use the issued certificates for your internal system."* – Steffen Ullrich Jul 17 '22 at 18:10
16

"Use public certs, but for internal addresses."

This option works quite well, that's what we do.

You can actually do HTTP validation; the certificate does not include the IP address, just the DNS name. So you could point your DNS at an external service, validate, and then point it back at an internal IP.

But DNS validation works better, if your CA supports it, because you don't have to change the entry every time you need to do a renewal. All an SSL certificate does is validate that you control the name, so if you can create subdomain entries, you obviously control the name (see the sketch below).

Instructions for letsencrypt: https://serverfault.com/questions/750902/how-to-use-lets-encrypt-dns-challenge-validation

Update: better instructions for a much nicer mechanism: https://jsavoie.github.io/2021/06/01/letsencrypt.html
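
For example, with certbot's manual DNS-01 mode (the hostname below is a placeholder; DNS plugins like --dns-cloudflare or --dns-route53 can automate the TXT record instead of the interactive prompt):

# certbot asks you to create a TXT record at _acme-challenge.<name>, then validates it
certbot certonly --manual --preferred-challenges dns -d secret.private.example.com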

Bryan Larsen
  • _"you could point your DNS to an external service, validate, and then point it at an internal IP"_ - How to do that? Does this require some configuration of the network router? – tom Dec 14 '18 at 01:15
  • You can spin up an AWS instance or anything similar for just long enough to install certbot and validate your cert. – Bryan Larsen Dec 14 '18 at 13:24
5

I've been working on this myself too, and just implemented HTTPS on some internal web applications.

First, on our internal DNS server, I set up a new forward lookup zone, corp.domain.com, and then set up some A/CNAME records within it, e.g. webapp1.corp.domain.com pointing to our internal LAN IP.

We already own a wildcard SSL cert for *.domain.com, so with my cert request I created a duplicate SSL cert for webapp1.corp.domain.com.

I installed this into our IIS, and the browser now shows the site as SECURE :)

ArtR
5

The best solution I have discovered so far is to use a reverse proxy such as Caddy to handle the certificates (issuance & renewal), then set up a DNS server to point all internal hostnames at the Caddy server, and everything works automagically. If you want to limit access to internal use only, you can close port 80 until your certificates require renewal.
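
A minimal sketch of the idea, using Caddy's one-liner mode (the hostname and backend port are placeholders; Caddy still has to be able to complete an ACME challenge, over HTTP or DNS, to obtain the certificate):

# Terminates TLS for webapp1.corp.example.com and proxies to the internal app;
# certificate issuance and renewal are handled automatically.
caddy reverse-proxy --from webapp1.corp.example.com --to localhost:8080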

Gyver Chang
  • You don't really need Caddy for that; you only need BIND (or any DNS server) that, for the CA *and only for it*, resolves all your `*.intra.yourcompany.com` hosts to your external HTTP proxy. – peterh Apr 30 '19 at 09:06
2

I believe the requirement for public IPs and FQDNs is shouldered by the CAs, not the browsers, so you should be OK (at least, unless someone has better information; you can set up a test in an afternoon and confirm one way or the other).

This rules out public certs at a minimum. Of your remaining three options, the best user experience comes from creating your own CA, and it's also the most trustworthy option, since it provides the full benefit of authentication that the other options lack.

For what it's worth, you can install root certificates on Android (v4.x and later) and on iOS (as you say, a relatively arcane process), though no doubt that's a pain.

The only other real option is to get a public IP and set up DNS with a public domain name, and just limit the heck out of access to it, perhaps through VPNs or the like.

Using a self-signed cert that users accept on each use should be the absolute last resort; as you noted, that's just training them to do the wrong thing.

Jason
  • “and just limit the heck out of access to it, perhaps through VPNs or the like.” I'm not quite sure what this means. Usually, I use VPNs to access private spaces. How would I use a VPN to access a public IP I couldn't already access otherwise? – alficles Apr 21 '16 at 20:31
  • That's a fair question, and perhaps that's not the correct way to limit access in such a way that you are still accessing the site via its public addressing. Perhaps the most "conventional" method to limit access is some combination of source IP restrictions and/or client certificates (the latter of which doesn't ease your administrative burden much). – Jason Apr 21 '16 at 21:19
  • Public CAs will no longer include a private IP in a certificate (or any IP at all nowadays), but this does not matter, because you only need a certificate with a hostname. To which IP address this name resolves does not matter for certificate validation, and definitely not for certificate issuance. The only requirement is that you prove you own the name, which might be a challenge if the name is only used inside an internal network. – Steffen Ullrich Apr 22 '16 at 05:01
2

Use public certs, but for internal addresses

This is a good option when using DNS validation, but it has a couple of downsides, depending on your requirements:

  1. DNS management often lives in a very different place from where you need certificates (or with a different team!), meaning you'll need all your ACME (i.e. Let's Encrypt) clients to have access to DNS, or you need to have one ACME client in a central location and figure out how to get the resulting certificates to all your endpoints.
  2. All certificate names issued by public authorities will be listed in public Certificate Transparency logs. This point was omitted by other replies but can be very serious for organizations that want their internal domains to remain private: the public, including competitors, can easily perform a quick search to find super-secret-internal-project.my-company.com (see the sketch after this list).
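
To see just how exposed those names are, anyone can query the CT logs, e.g. via crt.sh (the domain here is a placeholder):

# Lists every publicly logged certificate for names under example.com
curl -s 'https://crt.sh/?q=%25.example.com&output=json'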

As the ecosystem has evolved quite a bit since some of the other replies, it's now relatively straightforward to run your own CA, using tools like step-ca, cfssl, or openssl.

If you still want to use the ACME standard, the best option is to run a registration authority (RA) inside your network that can respond to the ACME HTTP challenges, but then proxy your request to your own upstream CA outside the network. This means full automation, proper ACME, and a CA you can share across networks. Smallstep's open source step-ca can run in RA mode to do this. You could also just run step-ca local to the network without an RA if you don't need to reach across networks.
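
As a rough sketch of the local ACME setup (commands follow the step-ca and certbot documentation; the CA address and hostname are placeholders):

# On the CA: add an ACME provisioner so step-ca can answer ACME requests
step ca provisioner add acme --type ACME

# On a client: request a certificate from the internal CA instead of a public one,
# telling certbot to trust the internal root for its own API calls
sudo REQUESTS_CA_BUNDLE=$(step path)/certs/root_ca.crt \
  certbot certonly --standalone \
  -d internal-app.corp.example.com \
  --server https://ca.corp.example.com/acme/acme/directory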

The downside, of course, is that you need to distribute the root certificate to users' trust stores so they don't get browser warnings and other verification failures, but many teams just do this with their existing MDM solution or using step certificate install root.crt in provisioning scripts.

  • Does this work on an air-gapped network that has no bridge to any public network? – Daniel Glasser Oct 23 '21 at 14:46
  • @DanielGlasser, yes, absolutely. The simplest solution in an air-gapped environment is just to run an instance of `step-ca` on that network. As long as all your services that need certs have network access to `step-ca`—regardless of ingress or egress to the public—you're good to go. – Alan Christopher Thomas Oct 25 '21 at 04:33
0

Wildcard certs have been available for purchase since the early 2000s. For companies I've worked with, if they needed 3 or more hostnames, we bought wildcard certs. There's absolutely nothing wrong with sticking the *.example.org cert on public.example.org and super-secret-private.example.org. The big limitation is that it only works for one level: if you have hostnames like something.region.example.org, they won't work with the *.example.org cert. You have to buy a *.region.example.org wildcard cert.

No, you shouldn't buy a cert for super-secret-private.example.org, and you shouldn't use your secret hostname as an alias. That's what wildcards are for.

People have been using split DNS for a really long time, to have different servers serve different content depending on the client's IP or the DNS server it resolves from. That has worked since DNS started being a thing.

What I think someone alluded to: no, you aren't locked into a specific IP. You're probably thinking of the problem that SNI fixed, somewhere between 2006 and 2009. No one cares if foo.example.org resolves to 1.2.3.4, 5.6.7.8, and 9.10.11.12 (DNS round-robin) while those IPs reverse-resolve to servername.example.net. When you're running hundreds or thousands of domains on one IP, there is no way to make those IPs resolve back to all the names that use them. Or at least I've never bothered to, because it isn't a problem that needs fixing.

A while back, I added an A record for fw.example.org to the DNS of a couple of domains we use a lot. It resolves to 192.168.1.1. That lets us use real SSL certs on all the firewall appliances. While I wouldn't give the certs out to everyone to use at home, the technical people do. If nothing else, it clears up the browser warnings.

Since January 2018, wildcard certs have been available for free through Let's Encrypt. It isn't as easy as the common case, where you can just stick the token on your web page. For wildcard certs, you need to control your DNS; specifically, you need to be able to push TXT records into your zones on demand, between starting the ACME request and telling Let's Encrypt that it's ready. Most people are familiar with the HTTP method (the Let's Encrypt HTTP-01 challenge). To get wildcard certs, you have to use the DNS method (the Let's Encrypt DNS-01 challenge). Or at least that was required last time I did any work on our scripts.

I have been using dehydrated on Slackware with this Slackbuild. It was easiest for me to use with our other things.

This is the relevant snippet from my hooks.php file. It runs on the machine sending the requests out, and pushes the changes up to the DNS servers. I have been doing this since 2018, when we switched everything over to Let's Encrypt.

foreach ($dns_servers as $server){

  $update_lines = "
     server $server
     zone $domain
     update add _acme-challenge.$domain. 60 IN TXT \"$token_value\"
     send
     answer
     quit
  ";

  file_put_contents($tmpfile, $update_lines, LOCK_EX );
  // send request
  $res = `nsupdate -k $bind_key $tmpfile 2>&1 | grep -v ^\;`;
  #logger("\n===\n!!! Send Update Add\n" . $res);

  if (strpos($res, 'NOERROR') !== false) {
     logger("Update Add Successful\t\t$domain @$server");
  }else{
     logger("Update Add Failure\t\t$domain @$server, $res\n");
  };

  // verify request
  $res = `dig TXT @$server _acme-challenge.$domain | grep -v ^\;`;
  $res = ltrim(rtrim($res));
  #logger("\n===\n!!! Verify Update Add\n" . $res);

  if (strpos($res, "$token_value") !== false) {
     logger("Update Verify Successful\t$domain @$server");
  }else{
     logger("Update Verify Failure\t$domain @$server, $res\n");
  };
};

That's PHP because of someone who sometimes maintains code there. It's just run locally via a cron, not from a web browser. It doesn't even live on a server with a web server. Things like that should be run on an internal server, without a public IP.
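
For completeness, here's roughly how a hook like this gets wired into dehydrated (option names are from dehydrated's documentation; the domain and hook path are placeholders):

# Run from cron; dehydrated calls the hook to deploy and clean up the TXT challenge records
dehydrated --cron --challenge dns-01 --domain '*.example.org' --hook /etc/dehydrated/dns-hook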

For a while, way back, I did make self-signed certs for every non-public-facing thing. I had even started to set up my own CA that I could trust on all of our clients' machines. It was fun to try, but it was more trouble than it was worth. There has been no reason to do that in years.

So, please get your wildcard SSL cert, and do put it on everything that you use. That's the right way to do it, not self-signed certs or making your own trusted signing authority. If you do it well, and set up a system to send the updated certs out to the relevant machines, you'll always have good certs. It's not that hard. If you can't figure it out, hire someone who is competent enough to write a few simple scripts for you.
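
As a sketch of that last part (the hostnames, paths and service name are all assumptions, not a description of our actual setup):

# Push renewed certs to each web host and reload the web server
for host in web1.example.org web2.example.org; do
    rsync -a /etc/dehydrated/certs/example.org/ "root@${host}:/etc/ssl/example.org/"
    ssh "root@${host}" 'systemctl reload nginx'
done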

JWSmythe
0

If you want to run your own internal, private PKI, check out https://pkiaas.io.

0

After a long search, I finally managed to create an SSL certificate for my local network.

I used Smallstep (smallstep.com).

Installation:

Install step

wget https://dl.step.sm/gh-release/cli/docs-ca-install/v0.20.0/step-cli_0.20.0_amd64.deb
sudo dpkg -i step-cli_0.20.0_amd64.deb

Install step-ca

wget https://dl.step.sm/gh-release/certificates/docs-ca-install/v0.20.0/step-ca_0.20.0_amd64.deb
sudo dpkg -i step-ca_0.20.0_amd64.deb

Now initialize a certificate authority:

step ca init --name "Local CA" --provisioner admin --dns localhost --address ":8443"

You need to enter a password, and then you will get a result similar to this:

✔ Root certificate: /home/mhefny/.step/certs/root_ca.crt
✔ Root private key: /home/mhefny/.step/secrets/root_ca_key
✔ Root fingerprint: 1d2817edc4ace09f727babb020ff4e9f54bd1b9251530c687b210e56cf1f5d44
✔ Intermediate certificate: /home/mhefny/.step/certs/intermediate_ca.crt
✔ Intermediate private key: /home/mhefny/.step/secrets/intermediate_ca_key
✔ Database folder: /home/mhefny/.step/db
✔ Default configuration: /home/mhefny/.step/config/defaults.json
✔ Certificate Authority configuration: /home/mhefny/.step/config/ca.json

Remember the fingerprint and the paths.

Now assume you have a domain called mylocalnetwork.local

Let's generate a certificate for it:

step ca certificate --offline mylocalnetwork.local foo.crt foo.key

Either use a local DNS server or just add the domain name to /etc/hosts so that it resolves to the machine's IP.
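
For example, assuming the web server's LAN IP is 192.168.0.13 (adjust to your own):

# Quick-and-dirty name resolution without a local DNS server
echo "192.168.0.13 mylocalnetwork.local" | sudo tee -a /etc/hosts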

The ONLY missing thing is to export the root certificate, which will be used as an authority certificate in Google Chrome. Start the CA:

step-ca $(step path)/config/ca.json

and from another terminal run:

step ca root root.crt

Add root.crt to Chrome and any other browsers you will use to access your website.
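
If you would rather trust it system-wide than add it to each browser, the step CLI you already installed can do that too (on Debian/Ubuntu you can alternatively copy it into /usr/local/share/ca-certificates/ and run update-ca-certificates):

# Installs root.crt into the system trust store
step certificate install root.crt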

Create a website and use foo.crt and foo.key as the SSL certificate and key, respectively.

In your browser, go to: https://mylocalnetwork.local

and you are DONE!

M.Hefny
  • Please point out why you would recommend using this (probably proprietary) software over known and trusted OpenSSL and similar software. – Sir Muffington Jul 17 '22 at 16:29
  • The software is open source. I have posted the steps here because these are the steps I found after many searches, and they are straightforward and work. – M.Hefny Jul 18 '22 at 04:41