
Dilemma

We are setting up a docker registry server for our company, following the official instructions: https://docs.docker.com/registry/deploying/#run-an-externally-accessible-registry.

We are evaluating which option to implement:

  • More secure, less practical: put the server on a private network behind a firewall, reachable via SSH, and have developers manually open a tunnel (for example an SSH tunnel) from their homes around the world every time they want to use it (a tunnel sketch follows this list).
  • More practical, but is it less secure? Place the docker registry server on a public IP, protected with TLS. Developers just "push" there over a TLS channel. TLS would be built on keys/certificates issued by Let's Encrypt with domain validation (a run sketch follows further below).
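A minimal sketch of the first option, assuming a bastion host and a registry listening on localhost:5000 behind the firewall (host and image names are hypothetical):

    # Forward local port 5000 to the registry behind the firewall
    # (-N means "no remote command", just the tunnel).
    ssh -N -L 5000:localhost:5000 dev@bastion.example.com &

    # While the tunnel is up, the registry looks local. Docker allows
    # plain-HTTP localhost registries by default, so no TLS is needed here.
    docker tag myapp:latest localhost:5000/myapp:latest
    docker push localhost:5000/myapp:latest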

For this question, let's assume we cannot whitelist the developers' home-residential IP ranges, and that the server is reachable from 0.0.0.0/0.
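And a sketch of the second option, following the linked deployment guide; the certificate file names (Let's Encrypt's fullchain.pem/privkey.pem) and paths are placeholders:

    # Run the registry with TLS terminated by the registry itself.
    # ./certs holds the Let's Encrypt certificate chain and private key.
    docker run -d \
      --restart=always \
      --name registry \
      -v "$(pwd)/certs:/certs" \
      -e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
      -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/fullchain.pem \
      -e REGISTRY_HTTP_TLS_KEY=/certs/privkey.pem \
      -p 443:443 \
      registry:2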

On-topic, off-topic

I already know I can improve security by firewalling the residential CIDRs that ISPs assign to the remote developers, and so on. That's not the topic of this question.

I want to focus the question on having a server with private data running on a public IP and a public port, protected with TLS.

Afraid

I'm a bit afraid of that. I have always kept all my ports closed to the public except 80, 443 and 22. For administration we usually allowed SSH from anywhere without problems.

For example, when I ran MySQL, port 3306 was only reachable "from the inside", and we had to connect via SSH first and then use SSH tunnelling to reach it.
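That pattern looks roughly like this (host and user names are hypothetical):

    # Tunnel local port 3306 to the DB host's loopback interface.
    ssh -N -L 3306:127.0.0.1:3306 admin@db.example.com &

    # -h 127.0.0.1 forces TCP through the tunnel (-h localhost would
    # try the local Unix socket instead).
    mysql -h 127.0.0.1 -P 3306 -u appuser -p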

Putting MySQL on a public IP, protected "only" with TLS, goes against my intuition. But maybe it's as secure as going through SSH and I just didn't know.

Previous investigation

I have seen claims that properly configured TLS is as secure as SSH (see this similar question: Is connection established with two way SSL as secure as SSH?).

There is also extensive documentation on the TLS versions: https://en.wikipedia.org/wiki/Transport_Layer_Security

The new v1.3, finalized in 2018, seems to be "ultra secure" (https://www.ietf.org/blog/tls13/). Can I blindly rely on it?

Questions

The word that scares me the most is "properly" configured.

1) How can I know whether this combination (a docker registry on a public IP, TLS, and Let's Encrypt certificates) protects me as much as an SSH connection does?

2) Is TLS 1.3 "much" better than 1.2?

3) How can I know if my setup is using TLS 1.3?

4) Provided the registry will have users/passwords, is adding TLS "secure enough"?

5) Does this mean that, if in a hurry I exposed a DB to the public but ran it over TLS, it would be as secure as putting it behind SSH?

Xavi Montero

2 Answers


TLS only protects the communication between client and server against sniffing and modification. This means it does not magically make a server-side application secure against attacks from malicious clients. It also does not make any existing authentication method stronger, i.e. weak passwords are still weak passwords, and they are as easy to crack as without TLS. TLS can add a much more secure authentication method, though, in the form of client certificates.

Based on this, the more secure approach is your first proposal, where the user first needs to authenticate against the SSH server (hopefully with key-based authentication, not passwords) and can then access the docker registry. This is more secure because it hides everything behind an additional and strong (if key-based) layer of authentication.

With proper authentication, such as client certificates, the web interface of the docker registry could also be made public, but then you lose the protection of the additional SSH-based authentication layer. Anything except the web interface should not be made accessible from the outside, i.e. no database etc. But as far as I know that is not needed for a docker registry anyway.

Steffen Ullrich
  • Oh, silly me! Sure! How could I misunderstand such a basic thing! In SSH you use your "private key", which needs to be "accepted" by the daemon. In TLS you secure the channel, but there's nothing about "who is there" beyond "the one there now is the one who has been there from the beginning". So... of course... unless the application uses a key-based system instead of passwords, SSH (using keys, not passwords) is the way to go. Unless I could get the docker registry to handle auth via keys (which IDK if it's possible), TLS alone is less secure. Got it! Thanks! – Xavi Montero May 01 '20 at 18:58

1.

TLS, as it is most often used, has server-only authentication and no client authentication. For web applications that do authenticate clients (e.g. Gmail), client authentication is done by the application, independently of TLS, for example by sending a password from the client to the server inside the TLS tunnel. But that password can be weak, can be stored in cleartext, and can be sniffed by JS code loaded in the login page, and TLS won't protect you from any of that.

SSH, as it is used most often, does server authentication and client authentication in the SSH handshake.

If you configure TLS with client authentication by issuing your users certificates signed by your internal CA, and requiring the TLS terminator to verify the client certificates are indeed signed by your internal CA (things like nginx and Apache can do this), then you get the same security as SSH, in theory.
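A rough sketch of such an internal CA with openssl (names and validity periods are arbitrary; a real setup would protect ca.key far more carefully):

    # Create the internal CA (self-signed, 10 years).
    openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
      -subj "/CN=Example Internal CA" -keyout ca.key -out ca.crt

    # Issue a client certificate for one developer.
    openssl req -newkey rsa:4096 -nodes \
      -subj "/CN=alice" -keyout alice.key -out alice.csr
    openssl x509 -req -in alice.csr -CA ca.crt -CAkey ca.key \
      -CAcreateserial -days 365 -out alice.crt

    # After pointing the TLS terminator at ca.crt (in nginx:
    # ssl_client_certificate plus ssl_verify_client on), test with:
    curl --cert alice.crt --key alice.key https://registry.example.com/v2/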

If you compare SSH server+client cert auth against TLS server+client cert auth, then, since when correctly configured they use almost the same cryptographic algorithms (it's easier to use modern ed25519 keys and signatures in SSH than in TLS), you are basically comparing "what is the chance of a bug in my server and client software?". If both are developed by competent teams, I would say the software that has fewer features and where security is closest to being the most important feature (ideally, the only feature) would be more secure. So if I had to bet, I would bet OpenSSH is more secure than Apache or nginx or IIS, simply because Apache and nginx and IIS do so much more.

2.

TLS 1.2 configured to only support the same cipher suites that are allowed in TLS 1.3, even when it uses RSA PKCS#1v1.5 instead of RSA-PSS (PKCS#1v2.2), is probably ok. If you allow the cipher suites that are not allowed in TLS 1.3, it is less safe. So only allow TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 and TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256.
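To see which suites those two names map to in OpenSSL's naming, and to confirm a server rejects anything weaker, a sketch (the hostname is a placeholder):

    # The IANA names above correspond to these OpenSSL cipher names:
    openssl ciphers -v 'ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-CHACHA20-POLY1305'

    # This handshake should FAIL on a correctly restricted server,
    # because ECDHE-RSA-AES256-SHA384 is the CBC (non-GCM) variant:
    openssl s_client -connect registry.example.com:443 \
      -tls1_2 -cipher 'ECDHE-RSA-AES256-SHA384' </dev/null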

You would want to use a major implementation of TLS, like OpenSSL (or BoringSSL) or MS SChannel, not a minor implementation, because minor implementations receive far less scrutiny.

3.

Use openssl s_client, testssl.sh, the Qualys SSL Labs online test, or an equivalent tool.
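For example (the hostname is a placeholder; -tls1_3 needs OpenSSL 1.1.1 or newer):

    # Force a TLS 1.3 handshake; if the server does not support 1.3,
    # this fails instead of silently falling back to 1.2.
    openssl s_client -connect registry.example.com:443 -tls1_3 </dev/null \
      | grep -E 'Protocol|Cipher'

    # Or run a full scan:
    ./testssl.sh registry.example.com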

4.

Passwords are not very secure, and user-generated passwords are terrible. Computer-generated passwords that are long and random enough to act as keys (e.g. 30 bytes read from /dev/urandom, encoded in base64 to 40 chars) are better. But authentication that does not transmit the password at all is better still, which is why SSH password-based authentication should be disabled and everyone recommends keys or certificates (a certificate is just a key additionally signed by someone to say "I attest this key belongs to this person"). So HTTP Basic-Auth over TLS with computer-generated passwords is secure, but TLS client certificates are more secure, TLS client certs with the keys stored in YubiKeys are more secure still, and I believe SSH is even more secure, because of the fewer-features argument above.
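A sketch of that password recipe, combined with the bcrypt htpasswd file the docker registry's basic auth can consume (the user name is hypothetical):

    # 30 random bytes -> exactly 40 base64 characters.
    PASSWORD="$(head -c 30 /dev/urandom | base64)"
    echo "$PASSWORD"

    # Store it bcrypt-hashed; the registry can be pointed at auth/htpasswd.
    mkdir -p auth
    docker run --rm --entrypoint htpasswd httpd:2 -Bbn alice "$PASSWORD" \
      > auth/htpasswd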

5.

If your DB is exposed to the internet and the connection requires TLS, but TLS client auth is not required, then the only advantage of TLS is that the password is not sent in the clear. Though many databases have some kind of HMAC-based password auth scheme that also avoids sending the password in the clear, even over cleartext channels.
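With MySQL, for instance, the client can be told to insist on TLS and to verify whom it is talking to (these flags exist in MySQL 5.7+; host and user names are placeholders):

    # REQUIRED only encrypts; it does not authenticate the server.
    mysql --ssl-mode=REQUIRED -h db.example.com -u appuser -p

    # VERIFY_IDENTITY also checks the server certificate and hostname,
    # which is what actually defeats man-in-the-middle attacks.
    mysql --ssl-mode=VERIFY_IDENTITY --ssl-ca=ca.pem \
      -h db.example.com -u appuser -p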

Anyway, by exposing the database server's auth system to the internet, you rely completely on the database auth system having no default passwords on system accounts, no accidental anonymous guest accounts, etc. Databases are never supposed to be deployed this way, because none of them is really secure in this configuration, despite all of them supporting it.

Why are databases insecure, even though they all have features that should make them secure? In general, security is only good in products whose only job is security. If a product does something else and also has security features, the security features suck. So a zip tool or spreadsheet with built-in password protection is not as good as a zip tool or spreadsheet plus separate encryption software that only does encryption. A database with built-in TLS support is not as good as a database behind a secure tunnel provided by something that only provides secure tunnels. User account separation is not as good as server separation. Etc.

A TLS setup that requires TLS client auth using user certificates is better than TLS server-only auth + password auth by the application, but a database that is only accessible on its localhost and requires SSH key/certificate auth to reach it is even better.

Z.T.
  • Thanks Z.T. As I commented to Steffen, you both pointed out my great misunderstanding: both SSH and TLS encrypt the channel, but SSH in a typical setup requests client keys, while TLS in a typical setup does not. That was my missing piece. I am going to select your answer as the valid one, because although Steffen also points out that TLS can request client certs, you are much more explicit about how applications handle them. You also make a good point in noting that the "only purpose" of SSH is security, while other applications have other focuses and TLS is an add-on, not the primary focus. – Xavi Montero May 01 '20 at 19:15