50

I have a small number of different workstations (plus client devices like an iPhone) that I use to connect to numerous servers over SSH.

Originally, when I learned about PKI, I created a single key pair on my workstation, which I promptly started using from everywhere (meaning I copied it to my laptop, etc.), and used it for connecting to all of my server accounts.

Since then, I've developed the opinion that this is akin to using a single password everywhere, and I might need to revoke access selectively, e.g. if one of my workstations is compromised.

Basically I'm trying to develop a personal strategy for dealing with key pairs:

What is the right way to think about key pairs securely for general use, while remaining pragmatic for usability? (I.e. I don't think single-use key pairs for every connection point necessarily make sense either.)

Is it wrong for a key pair to represent a personal identity globally from all access points ("me-everywhere-id_rsa") as I did in the past, or is my current thinking that a unique client-user combination should always be represented by its own key ("me-on-my-personal-laptop-id_rsa") more appropriate?

Do you reuse the same key for connecting to all servers, or under what conditions do you consider minting a separate key?

Andrew Vit
  • Related: [Good practice to use same SSH keypair on multiple machines?](https://unix.stackexchange.com/questions/27661/good-practice-to-use-same-ssh-keypair-on-multiple-machines) unix.SE 27661 – n611x007 Aug 04 '15 at 12:00

4 Answers

21

You already saw the main point: if one of your machines is compromised, the private key stored on that machine must be "revoked": you must configure every server you connect to so that it rejects further authentication attempts with that key (i.e. you remove the corresponding public key from each server's .ssh/authorized_keys). There is a mitigation technique, which consists of securing the private key with a password (or passphrase). This comes at a price, namely that you have to type the password (ssh-agent can be quite handy for that); on the other hand, it may temporarily keep the attacker from obtaining the private key. This depends on the kind of compromise, but if it is the theft of a complete machine -- a plausible scenario with mobile devices -- then the password will prevent immediate access to the private key and buy you some time to reconfigure the servers.
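As a sketch, generating a passphrase-protected key and later stripping its public half from each server's authorized_keys might look like this (the key file name and the `~/servers.txt` host list are made-up examples; run the cleanup from a machine that is still trusted):

```shell
# Generate a new key pair; ssh-keygen prompts for a passphrase:
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_laptop

# After a compromise, remove that public key from every server:
pub=$(cat ~/.ssh/id_ed25519_laptop.pub)
while read -r host; do
  ssh "$host" "grep -vF '$pub' ~/.ssh/authorized_keys > ~/.ssh/ak.tmp \
    && mv ~/.ssh/ak.tmp ~/.ssh/authorized_keys"
done < ~/servers.txt
```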


One may note that "PKI" means "Public Key Infrastructure", and SSH is rudimentary in that respect. In a full-blown PKI, certificates would be used, with delegation and centralized revocation. With certificates:

  • you would create one CA key pair, stored safely somewhere;
  • for each client machine, you would obtain a fresh key pair, the public key being signed by the CA;
  • the servers would be configured to automatically accept any public key signed by the CA (they would know the CA public key, but not the individual client machine keys);
  • the CA would maintain and regularly publish revocation information, i.e. the list of public keys which must no longer be accepted even though these keys are signed by the CA (revocation information must be pushed to the servers, or pulled on demand).
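The steps above can be sketched with OpenSSH's own tooling (all file and principal names are made up; the empty passphrases are for brevity only -- a real CA key should be heavily protected):

```shell
# 1. One CA key pair, stored safely somewhere:
ssh-keygen -t ed25519 -N '' -f ca_key

# 2. For each client machine, a fresh key pair, its public half signed by the CA:
ssh-keygen -t ed25519 -N '' -f id_laptop
ssh-keygen -s ca_key -I "me@laptop" -n me id_laptop.pub
# produces the certificate id_laptop-cert.pub

# 3. Servers accept anything signed by the CA (in sshd_config):
#      TrustedUserCAKeys /etc/ssh/user_ca.pub

# 4. Revocation information, pushed or pulled to the servers
#    (in sshd_config, on OpenSSH versions that support it):
#      RevokedKeys /etc/ssh/revoked_keys
```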

The great thing about certificates is that you centralize decisions, which, in practical terms, means that if you connect to 20 servers from your client machines, and then add a new client machine, you do not have to manually push the new public key to all the 20 servers.

OpenSSH, since version 5.4 (released March 8, 2010), has some support for certificates; see the eponymous section in the man page for ssh-keygen. OpenSSH certificates have a much simpler format than the "usual" X.509 certificates (as used with SSL). The simplification also has a price: there is no centralized revocation support. Instead, if a private key is compromised, you still have to blacklist the corresponding public key on all the servers, and the blacklist is, in OpenSSH, expressed as a whitelist (option AuthorizedPrincipalsFile in sshd_config) which is normally under the control of root. So revocation does not work well, and you still have to manually configure things on each server whenever you create or lose a key -- the exact inconvenience that PKI was supposed to abolish.

You could still make time-limited keys, because OpenSSH certificates can embed a validity period. In that case, you would give each client machine a key which is good for, say, one week. With keys which die after a week, and a CA public key known by the servers, you have the main CA goodness (no need to push anything to all the servers when a new machine is added to the client pool), and if a private key is compromised, the damage is "limited", assuming you had a key password which, presumably, would resist one week of cracking (but this will not work if the compromise is a hostile takeover with a key logger). Time-limited keys can grow tedious over time, and they assume that all servers have a correctly set clock, which is not always a given (it would be unfortunate to be locked out of a server because the server clock is wildly off, and resetting the clock requires SSH access to the server...).
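Such a short-lived certificate is one flag at signing time (the `ca_key` and `id_laptop.pub` file names are hypothetical):

```shell
# Sign a client public key with a one-week validity window;
# after that, the certificate simply stops working:
ssh-keygen -s ca_key -I "me@laptop" -n me -V +1w id_laptop.pub

# Inspect the validity interval:
ssh-keygen -L -f id_laptop-cert.pub
```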

An additional issue with SSH is that it is easy to add public keys on each server. This means that if an attacker compromises a private key and gains access to a server once, he can add his own public key to the .ssh/authorized_keys on that server, and no amount of revocation will fix that. Revocation is, by nature, an asynchronous process. Hence, it does not implement a sufficiently strict damage control.
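One common hardening measure against this (an aside, not part of the answer): have sshd read keys from a root-owned location instead of the user-writable ~/.ssh/authorized_keys, e.g. in sshd_config:

```
# Users can no longer add their own keys; only root can:
AuthorizedKeysFile /etc/ssh/authorized_keys/%u
```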


Given the shortcomings of SSH's support for certificates, there is no reasonable scheme that avoids doing some configuration on all the servers in some situations. So you basically have the following choices when you add a new client machine:

  1. You copy the private key from another client machine (that's what you are doing right now). This requires no extra configuration on the servers.

  2. You create a new key pair for that machine. You have to push the public key to all the servers.

In the event of a key compromise, you must connect to all the servers to remove the corresponding public key from all the .ssh/authorized_keys. This is unavoidable (to avoid it, you would have to use certificates, and SSH is not good at certificates, see above). If you used choice 1, you must also create a new key pair and push it to all the client machines that were using a copy of the compromised private key.
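Under choice 2, pushing a new machine's public key to every server can be scripted with ssh-copy-id (the key file and the `~/servers.txt` host list are hypothetical):

```shell
# Enroll a new client machine's key on all servers:
while read -r host; do
  ssh-copy-id -i ~/.ssh/id_ed25519_newlaptop.pub "$host"
done < ~/servers.txt
```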

Hence, choice 1 entails less configuration work than choice 2 in the normal case, but things are reversed if a key compromise occurred. Since compromises are normally rare events, this would favour choice 1 (which is what you already do). So I suggest that you protect your private key with a strong password, and copy it to all your client systems.

Note: in all of the above, I assumed that you wanted to connect to a list of servers with SSH, and you wanted to access any of the servers from any of your client machines. You might want to restrict access, such as accessing a given server only from one or two specific client machines. This can be done with multiple client keys, but configuration complexity increases quadratically.
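Such per-machine restrictions can also be expressed directly in a server's authorized_keys with a from= option (the address and key below are placeholders):

```
# Only accept this key from one client address:
from="192.0.2.10" ssh-ed25519 AAAAC3...placeholder me@laptop
```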

Tom Leek
  • > *"The great thing about certificates is that you centralize decisions, which, in practical terms, means that if you connect to 20 servers from your client machines, and then add a new client machine, you do not have to manually push the new public key to all the 20 servers."* Centralized granting of access can be done easily with SSH keys without certificates as well. If a new client needs to be added, it suffices for the admin to select the desired subset of available servers that the client should have access to and run a simple script from a controller server. – Ján Lalinský Apr 13 '17 at 19:44
  • > *"you must connect to all the servers to remove the corresponding public key from all the .ssh/authorized_keys. This is unavoidable (to avoid it, you would have to use certificates, and SSH is not good at certificates, see above)"* Using certificates, the work that has to be done is the same - the CA server needs to update the servers on the revoked certificates. I think for access authorization, certificates have some disadvantages compared to using SSH keys. It is more complicated to control which server is accessible to which client. – Ján Lalinský Apr 13 '17 at 19:48
  • "Since compromises are normally rare events, this would favour choice 1" - isn't preference supposed to be determined by the *combination* of probability and the consequences of the event? Key compromises might be rare, but if it takes you even a couple of hours (let alone days or weeks or worse) to revoke keys, a really competent or reckless-but-lucky attacker can spread through your system like wildfire. What's the exchange rate between the greater possibility of far more severe system compromise vs. occasionally creating a new key pair and spreading the public part to a list of hosts? – mtraceur Jun 05 '18 at 04:40
18

I favor having one key per authentication realm. Thus, for every desktop or server machine at work (all quite physically secure and under the control of a single group of administrators), I have one private key. I use the same private key on my new home PC as my old home PC. I use different keys on laptops and other mobile devices. This approach allows fine-grained risk management: I can revoke keys separately, and limit the possibilities for my accounts to be involved in an escalated infiltration by not authorizing certain keys on certain machines (I don't do that much, but it's an option for the paranoid).

As long as you don't do anything complicated, copying the public keys is not that difficult. You can keep one big list of authorized keys and synchronize it everywhere; enrolling a device means adding one item to that list and pushing the change. This isn't much more work than pulling the private key, if you have a central data repository in the first place.
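For instance, that synchronization can be one loop pushing the master list from the central repository (the file and the `~/servers.txt` host list are examples):

```shell
# Push the single master list to every server:
while read -r host; do
  scp ~/keys/authorized_keys "$host:.ssh/authorized_keys"
done < ~/servers.txt
```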

Note that while the most obvious advantage of having separate private keys is being able to revoke one separately, they also have an availability benefit: if a system administrator on machine A decides to revoke the key of machine C, which you're on, you may still be able to find some machine B that still accepts C's key and whose key is accepted by A. This does seem rather esoteric, but it happened to me once (after Debian realized they weren't seeding their RNG, I was glad not to have to depend solely on a blacklisted key), whereas I've not yet had to revoke a key in an emergency.

There's even a theoretical benefit in keeping a separate key pair per machine pair, in that it allows separate revocations. But the benefit is extremely small, and the management is a lot more difficult (you can no longer broadcast authorization lists). Simplifying key management is a critical advantage of public key cryptography over shared secrets.

Gilles 'SO- stop being evil'
    Your strategy involves copying private keys, for example from your old home PC to your new one. This is a security risk. It is more secure to never move keys outside a machine, except for secure backup. Use a separate key per realm AND per machine. Yes, many more keys to manage. – dolmen Jan 20 '15 at 10:37
  • 1
    @dolmen From my old home PC to my new one, I cloned my home directory. Machines in the same realm includes cases like shared home directories over NFS where sharing SSH keys is inevitable. If you're worried about copying private keys between two machines then they aren't in the same realm. – Gilles 'SO- stop being evil' Jan 20 '15 at 13:08
  • I think the important nuance here is *method of copying* - if you're connecting directly over a dumb wire and transmitting from one box to another, or copying it through untrusted media in otherwise encrypted form, that's completely different than if you're copying no-passphrase SSH keys over USB sticks. Even if they stay in the same "realm", the mere act of copying has its own orthogonal security considerations that one should pay attention to. – mtraceur Jun 05 '18 at 04:22
10

Basically I'm trying to develop a personal strategy for dealing with key pairs

If you are primarily responsible for the servers in question, then I would strongly suggest you consider configuration management tools like Puppet or Chef. Both have methods for distributing SSH keys to your machines.

The reason I started using Puppet was specifically that I wanted a good way to quickly revoke a key on my ~60 Linux servers when staff changed.

When you have a configuration management tool in place to manage your keys, it becomes far easier to have many key pairs. Instead of manually revoking a key on every box, or adding a new key on every box, you simply add the public key on the configuration master, and the clients will update their authorization lists within whatever polling interval you set for your configuration management tool.
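As a sketch of what this looks like in Puppet (using its ssh_authorized_key resource type; user and key values are made up), adding or revoking a key is a one-line change on the master:

```
ssh_authorized_key { 'andrew@laptop':
  ensure => present,   # change to 'absent' to revoke everywhere
  user   => 'andrew',
  type   => 'ssh-ed25519',
  key    => 'AAAAC3...placeholder',
}
```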

This does mean that you are putting a lot of trust in your configuration management host not being compromised. Because it typically performs tasks with root privileges on every machine it manages, you must make sure it is locked down tight, with lots of auditing. If an attacker were to compromise the configuration master, they could do anything: add new keys, add new accounts, etc.

Zoredache
    This is a good tip. Yes, I use Chef for servers that I set up from scratch, however there are many other connections in my day-to-day use... – Andrew Vit Jan 25 '12 at 19:13
0

I'm using one key per realm and per machine: 4 remotes accessed from 2 workstations => 8 private keys. I never copy a private key from one host to another. If one workstation is compromised, only that workstation's keys need to be revoked.

As creating keys and configuring SSH to use them is quite tedious, I wrote a tool dedicated to managing my SSH keys for GitHub access: github-keygen.
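Without a dedicated tool, the same per-remote key selection can be wired up by hand in ~/.ssh/config (host and file names are examples):

```
Host github.com
    IdentityFile ~/.ssh/id_github
    IdentitiesOnly yes

Host work-server
    IdentityFile ~/.ssh/id_work
    IdentitiesOnly yes
```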

dolmen