26

I am building web applications for my customer's company. On the server side, there are two kinds of server-to-server network communication.

  1. Separate REST API servers making requests to each other.
  2. Communication from application load balancers (AWS ALB specifically) to their auto-scaling EC2 instances.

Currently, all of this communication uses plain HTTP. Only the user-facing nodes (such as the load balancer or the web server reverse proxy) serve HTTPS with valid certificates.

The customer asks us to change them all to HTTPS, as they believe it is modern best practice to always use HTTPS instead of HTTP everywhere.

I would like to dispute this with the customer, but I am no security expert. Please review my understanding below and correct me if I am wrong.


In my view, the purpose of the HTTPS protocol is to provide a trusted channel over an untrusted environment (such as the Internet), so I cannot see any benefit in switching an already trusted channel to HTTPS. Furthermore, having to install certificates on all servers makes them harder to maintain; chances are the customer will find their application servers broken someday because some server's certificate expired and nobody noticed.

Another problem: if we have to configure all the application servers behind the load balancer (Apache, for example) to serve HTTPS, what ServerName do we put inside the VirtualHost? Currently we have no problem using the domain name, such as my-website.example.com, for the HTTP VirtualHost. But if it were HTTPS, would we have to install the certificate for my-website.example.com on every instance behind the load balancer? That seems weird to me, because then we would have many servers all claiming to be my-website.example.com. A rough sketch of what I mean follows below.
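To make this concrete, here is roughly what I mean; the hostnames, paths and certificate locations are illustrative, not our real configuration:

```
# Current per-instance vhost: plain HTTP behind the ALB
<VirtualHost *:80>
    ServerName my-website.example.com
    DocumentRoot /var/www/my-website
</VirtualHost>

# What the customer asks for: every instance would apparently also need
# a certificate for the same public name (paths are placeholders)
<VirtualHost *:443>
    ServerName my-website.example.com
    DocumentRoot /var/www/my-website
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/my-website.example.com.crt
    SSLCertificateKeyFile /etc/ssl/private/my-website.example.com.key
</VirtualHost>
```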

asinkxcoswt
  • 375
  • 1
  • 3
  • 7
  • 21
    Worth a google: zero-trust networks. You are correct that HTTPS is intended to allow secure communication in an insecure channel. You assume that your internal network is trustworthy. – Conor Mancone Mar 08 '20 at 13:10
  • 1
    Your comments about SSL certificates are just an implementation detail and are not really on topic here (although the rest is fine). Your job as a service provider is to figure out the cost of these changes so you can tell your client how much it will cost, and let them decide if it is worth it for their business. Right now it just sounds like you don't want to be bothered to take the effort, which is not really a reasonable approach. Now if these were changes being requested at the end of a release process that you won't be paid for then I would certainly refuse. – Conor Mancone Mar 08 '20 at 13:13
  • 8
    Google used to have no encryption on traffic within its data centres - and then it came to light that there was a very high probability that state actors were reading that data, so Google moved to encryption everywhere... – Moo Mar 09 '20 at 04:27
  • 5
    Google *used to* not use HTTPS internally. Then, Snowden let everyone know that the NSA had attached extra stuff to Google's system so that the NSA could see all the traffic. Now Google uses HTTPS internally. This slide here: https://www.businessinsider.com/leaked-nsa-slide-of-google-cloud-2013-10?r=DE&IR=T – user253751 Mar 09 '20 at 11:12
  • Aside from security, there are also performance reasons to use HTTPS. For example, the use of HTTP/2 (instead of HTTP/1.1), which the AWS ALB supports by default. – Matthew Mar 09 '20 at 18:55
  • 1
    @Matthew iirc there is no reason you can't use http2 without TLS if you control the server and client. – Qwertie Mar 09 '20 at 22:59
  • Probably not worth an answer in its own right, but the ServerName for the VirtualHost should be whatever name you'll put in the URI you use to connect to the server - whether that's an IP address or a hostname. If you're signing certificates with an internal CA (which I'd recommend anyway), then you can use either hostnames or IPs as subject alt names, so there's no trouble getting a cert for that name (public CAs are fussier about certs for IPs, but you won't have that problem if you're managing it internally). – James_pic Mar 10 '20 at 10:31
  • @user253751 But did it help? – Michael Mar 10 '20 at 21:10
  • 1
    @Matthew I've used HTTP/2 without HTTPS without problems. Browser vendors just decided to not support it, but there's nothing in the spec that forbids that combination. – Voo Mar 11 '20 at 08:42
  • @Michael Well, it closed *that* attack vector. Now the NSA has to do riskier stuff, like sneaking extra chips into their servers (conjecture). – user253751 Mar 11 '20 at 11:10

6 Answers

45

The answer to your question comes down to threat modeling. Using cryptographic protocols like HTTPS is a security mechanism to protect against certain threats. Whether those threats are relevant to you must be analyzed:

  • Are there potential threat actors in your internal network? Based on your question you seem to assume that the internal network can be fully trusted. This is often a misconception, because there are several ways your internal network can be compromised (e.g. valid users with access to this network turn malicious, a system in this network gets compromised, a misconfiguration opens up the network segment, etc.).
  • Will the architecture be subject to change? It is likely that the system will change over time and prior security assumptions (e.g. "my internal network is trusted") will no longer hold. If that's a reasonable scenario, it might be a good idea to build the necessary security mechanisms in advance. That's what security best practices are for: providing security in the face of uncertainty.
  • Is there a regulatory, legal or compliance requirement that must be fulfilled? You said that your customer considers HTTPS to be state-of-the-art / modern best practice. The source of this friendly-worded statement might actually be an externally driven requirement that must be fulfilled. Non-compliance is a threat that should also be covered in a threat analysis.

Those are important topics worth analyzing. When I design system architectures and I am in doubt, I prefer to err on the side of security. In this case the best-practice approach is indeed to use HTTPS for communication, no matter the circumstances, as long as there isn't a considerable impact on the application (e.g. a performance impact).

Difficulty to maintain server certificates shouldn't be a problem nowadays, as this is common practice. This should be part of normal scheduled operations activity.

Having said all this, there is of course additional effort required to use HTTPS instead of HTTP and it is your right to charge the customer for this additional effort. I suggest you calculate what this will cost during development and over time during operation and let the customer decide if the cost is worth the benefit.

Demento
  • 7,249
  • 5
  • 36
  • 45
  • 8
    "Difficulty to maintain server certificates shouldn't be a problem nowadays, as this is common practice". You should tell the Microsoft Azure people that one. Or anyone who had outages due to the Let's Encrypt problem just last week, or.. I could go on. No really, the amount of outages due to problems with certificates in even the largest networks shows that this is not as trivial as people like to think in theory. And the consequences are severe. One really shouldn't underestimate that factor. – Voo Mar 09 '20 at 08:45
  • 12
    @Voo For internal server-to-server authentication, there is no reason to use a public CA such as Let's Encrypt, and indeed it is likely better not to. You can use an internally managed CA for internal certificates, so there's no need to worry about renewal (you can set expiry to 100 years or whatever), you don't need to give the servers internet access, and there's no possibility that someone can trick a public CA into mis-issuing a certificate, since only certificates signed by your internal CA need to be valid. – James_pic Mar 09 '20 at 11:40
  • 8
    @James Sure, but I could name problems with that approach too.. remember when Chrome decided to require alternate subject names for its certificate validation? We had old internal certificates that didn't set those. Or about the fun of configuring node applications to use your internal root CA. And so on. My point isn't that these things are insurmountable, but that you shouldn't underestimate them. – Voo Mar 09 '20 at 12:23
  • 2
    @BoogaRoo for server-to-server communication it doesn't matter whether Apple trusts your certs or not, only if your server does. If you're using libraries that allow you to configure them accordingly (many do), you can use SHA-1, 1024-bit RSA, no subject alt names, unreasonably long expiry periods, and all kinds of stuff that doesn't meet the baseline requirements and browsers and public CAs would baulk at. Which isn't to say that you should ignore good practice for no reason at all, but depending on your threat model, convenience may be a good enough reason. – James_pic Mar 10 '20 at 14:36
  • @James_pic somehow I misread that as "1-bit RSA." – user253751 Mar 11 '20 at 18:09
  • This answer, like many others, still overrates HTTPS even though it appears neutral. There are network environments where the implication of being insecure, as carelessly suggested here, has by far over 100 times bigger implications. In such an environment there is absolutely no need for HTTPS. I know you're a security expert, but say it; it won't hurt you. – Chibueze Opata Jul 14 '21 at 08:21
8

Mixing and matching HTTP and HTTPS is not a good idea - you will constantly be juggling configurations.

Usually adding a component into a system should only be done if there is a very specific reason for it - just because someone thought it a good idea is not a specific reason.

I'm not saying that HTTPS is a bad idea - quite the opposite - but you have a lot of learning to do. The model you propose undermines the trust relationship that is the primary reason for using TLS in the first place. You also don't seem to have thought about how to plan your PKI.

application servers broken someday because some server's certificate expired and nobody noticed

If you are providing the service, then you should be configuring monitoring for the service, including certificate expiry.
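For example, a minimal expiry check along these lines (the endpoint name is illustrative) can be wired into whatever monitoring is already in place; openssl's -checkend option fails when the certificate expires within the given number of seconds:

```
# Alert if the certificate served by a backend expires within 30 days (2592000 s)
echo | openssl s_client -connect backend1.internal:443 2>/dev/null \
    | openssl x509 -noout -checkend 2592000 \
    || echo "WARNING: certificate on backend1.internal:443 expires within 30 days"
```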

It sounds like you are looking for reasons to argue with the approach of rolling out certificates. Reading between the lines here, it seems you are currently lacking the skills and planning you need to implement this.

Yes, it's a lot of work, but that's the business model - you assess the amount of work, the skills you need to acquire and those you can buy in and you charge the customer for that. (Serge highlights the cost of the certificates - but that is the smallest cost in this whole exercise).

symcbean
  • 18,278
  • 39
  • 73
  • 2
    Using HTTPS on the internal network (or on a host-only network...) causes headaches, because you need to take care of certificates that will never be seen by anybody, and your systems continuously have to perform unneeded TLS handshakes. You will also have many headaches due to differing certificate metadata. Meanwhile, correctly written software should have no problem running anywhere, including an HTTP server behind an HTTPS proxy. – peterh Mar 08 '20 at 20:57
  • You really shouldn't be making judgement on someone's knowledge based on such little information as given in the question... – ScottishTapWater Mar 11 '20 at 11:23
  • @peterh-ReinstateMonica: the infrastructure overhead, even with distributed microservices, is so small it's nearly impossible to measure. And implementing it in advance of when you need it is a massive saving in cost and effort. If that is not your experience, then you are doing it wrong. But these remain irrelevant to my main point - the client wants it, the client is paying for it, it does no harm. – symcbean Mar 11 '20 at 14:36
  • (at least it does not harm if it is done properly) – symcbean Mar 11 '20 at 14:43
  • @symcbean Stats show that whether visitors come back depends greatly on how many tenths of a second a page takes to appear. Not surprisingly, when asked for feedback they only say that the page is not fast enough if it is really slow. Yes, tenths of a second matter if you are interested in visit stats. Note that while a TLS handshake is typically not a big data transfer, it involves many round trips: about 5-10 times one side has to wait for the other. (This can be reduced by connection pooling and TLS sessions; a nice task to configure them everywhere.) – peterh Mar 11 '20 at 15:01
  • 1
    @symcbean There is yet another argument: by encrypting the internal traffic you lose a lot of debuggability. With encrypted communication you can only see what the programs are actually saying to each other by debugging the programs themselves (with Apache's mod_dumpio or similar), whereas you can analyze plain HTTP traffic without the cooperation or modification of the participants. – peterh Mar 11 '20 at 15:03
  • @Persistence I had no intent of negative communication. – peterh Mar 11 '20 at 15:04
  • @peterh-ReinstateMonica my comment was aimed at the answerer, not at you. Since they explicitly say "you are currently lacking the skills" which is unfounded – ScottishTapWater Mar 11 '20 at 20:40
8

Internal networks are not secure

In general, internal networks are more secure than public-facing systems, but they should not be considered completely secure. A significant portion of attacks comes from the inside - spearphishing, social engineering and insider attacks are all popular vectors that start with a foothold inside your network.

So there's no good reason to send secret or private information unencrypted, even over your internal networks. You don't necessarily need public names or a CA hierarchy - if you have well-defined bilateral communication channels, it may be simpler to have an explicit trust relationship where your load balancers are configured to trust one particular self-signed certificate from your backend servers and nothing else (a sketch follows below).
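As a sketch of that approach, a long-lived self-signed certificate for a backend could be issued roughly like this (the hostname and validity period are illustrative; the -addext option assumes OpenSSL 1.1.1 or newer):

```
# Self-signed certificate for one backend server, valid for ~10 years
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout backend.key -out backend.crt \
    -days 3650 \
    -subj "/CN=backend.internal" \
    -addext "subjectAltName=DNS:backend.internal"
```

In this model, the calling side (load balancer or peer service) is configured to trust exactly this backend.crt and nothing else.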

Peteris
  • 8,369
  • 1
  • 26
  • 35
  • 1
    Also: remember that you can use technologies like certificate-based VPNs on internal networks as well as outward-facing ones, and the benefits are the same. You can wall-off portions of your internal network so that they cannot be accessed except by those internal systems which possess one-of-a-kind crypto credentials that only your company can issue. The nice thing about VPNs is that they secure *everything* ... and yet they do it *transparently* to the clients that are employing them. It adds one more strong layer of accountability, easily and manageably. – Mike Robinson Mar 09 '20 at 21:03
  • The title of this answer is misleading, '*A network that has not been explicitly secured is not secure*' is a better phrase. A network being internal has nothing to do with its security so you make the same error by using the opposite statement. Internal networks should be secure and if they're not or they can't be, that's a whole different ball game. If you have to use https in your secure internal network, you have way bigger problems. – Chibueze Opata Jul 14 '21 at 08:26
5

As a professional, you owe advice to your client, but you should not take the decision yourself.

The arguments to present to your client are:

  • What is the gain in using HTTPS inside the server network? If this network is isolated from any other system and only sysadmins can access it, you may argue that the gain is negligible, because it would just protect the system against someone who already has admin privileges. If other staff members without admin privileges can access it, the gain is not null; nor is it null if systems belonging to other clients can access it.
  • What is the risk of doing it? The digression on certificates is mainly... a digression. But the fact is that HTTPS is a more complex protocol than HTTP, and any added complexity adds risk of implementation errors. If the previous step concluded that the gain is negligible, that is enough to advise the client not to do it.
  • What would be the added cost? Here you have to consider direct and indirect costs. Direct costs could include the price of additional certificates if you use external ones, or the time to create private certificates if you use a private PKI. They would also include the time for configuring the system, and they should include maintenance as a recurrent cost, including scheduled renewal of certificates - this part is in your responsibility domain, but you can charge your client for the time. Indirect costs are harder to establish, but you should use your own experience to evaluate the risk of errors due to the added complexity and their possible consequences. And IMHO you may charge your client for that if they insist on not following your advice.

But when you have said all that, the client is responsible for the decision.

Serge Ballesta
  • 25,636
  • 4
  • 42
  • 84
3

Encryption is cheap. Data leaks and data loss are not.

Use encryption between servers (and it is even better to use TLS auth between servers).

And when I say cheap, it is cheap even considering the management of the keys and certificates. It may be reasonable to issue self-signed, long-lived certs to both servers.
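As a rough sketch of TLS authentication between two services (the hostname, URL path and file names are illustrative):

```
# Mutual TLS with curl: this client presents its own certificate and key,
# and trusts only the peer's pinned self-signed certificate
curl --cacert server.crt \
     --cert client.crt --key client.key \
     https://backend.internal/api/health
```

On the receiving side, the server would be configured to trust only client.crt (or the CA that issued it) and to require a client certificate.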

There are a few exceptions to the rule:

  1. Either the client or the server is legacy software with known SSL/TLS vulnerabilities. It is always better to update the vulnerable code, but we all know that is not always possible. It is sometimes better (still not good, but better) to disable the vulnerable code entirely, run plaintext and mitigate the risks in some other way.

  2. You are exchanging an insane amount of data and/or need insanely low latency. The encryption may become a bottleneck and/or a resource hog. You may opt for no encryption and do something else to secure the channel instead.

fraxinus
  • 3,425
  • 5
  • 20
0

"https" not only secures the communications as it passes over the network, but also verifies the certificate that is presented by the server. This enables you to know that you really are talking to the correct site. And this, in my opinion, is the real advantage of "https."

"letsencrypt.org," which issues signed certificates for free, has a lot of material on their web-site which discusses these benefits. They argue ... quite rightly, I think ... that "everything should be https, whether the material is actually sensitive or not."

(mod_ssl et al. can also enforce certificate possession on the client side, although this is rarely done. For a secure internal application, however, you might want to do such a thing: the server can then restrict which computers are able to connect to it at all, based on the credentials they must possess. A sketch follows below.)
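A minimal sketch of such a mod_ssl setup, with illustrative names and paths (the CA file holds whichever certificates you choose to trust for clients):

```
<VirtualHost *:443>
    ServerName internal-api.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/internal-api.crt
    SSLCertificateKeyFile /etc/ssl/private/internal-api.key

    # Only clients presenting a certificate signed by this CA may connect
    SSLCACertificateFile  /etc/ssl/certs/internal-clients-ca.crt
    SSLVerifyClient require
    SSLVerifyDepth  1
</VirtualHost>
```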