48

I work with small teams (<10) of developers and admins with the following characteristics:

  • Most members of the team have >1 personal computer, most of which are portable
  • Team members have access to 10-50 servers, usually with sudo

I think this is pretty typical for most startups and small- to mid-size corporate IT groups.

What are the best practices for managing SSH keys in a team like this?

Should you share a single key for the whole team?

Should each person have their own key on a shared account ("ubuntu" on every server)?

Separate accounts?

Should each team member keep a separate key for each of their laptop and desktop computers?

Evan Prodromou

8 Answers

35

At my company we use LDAP to have a consistent set of accounts across all of the machines and then use a configuration management tool (in our case currently cfengine) to distribute authorized_keys files for each user across all of the servers. The key files themselves are kept (along with other system configuration information) in a git repository so we can see when keys come and go. cfengine also distributes a sudoers file that controls who has access to run what as root on each host, using the users and groups from the LDAP directory.
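
To give a feel for it (a simplified sketch, not our exact layout; the names are made up), the repository holds one public-key file per user, and the distributed sudoers file grants root by LDAP group:

    # keys/jsmith.pub -- becomes jsmith's authorized_keys on every host
    ssh-ed25519 AAAA... jsmith@laptop

    # sudoers fragment pushed out by cfengine; group membership comes from LDAP
    %prod-admins ALL=(ALL) ALL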

Password authentication is completely disabled on our production servers, so SSH key auth is mandatory. Policy encourages using a separate key for each laptop/desktop/whatever and using a passphrase on all keys to reduce the impact of a lost/stolen laptop.
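
On the servers that mostly comes down to a couple of sshd_config lines (a minimal sketch; defaults vary a little between OpenSSH versions and distributions):

    # /etc/ssh/sshd_config
    PasswordAuthentication no
    ChallengeResponseAuthentication no
    PubkeyAuthentication yes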

We also have a bastion host that is used to access hosts on the production network, allowing us to have very restrictive firewall rules around that network. Most engineers have some special SSH config to make this transparent:

Host prod-*.example.com
     User jsmith
     ForwardAgent yes
     ProxyCommand ssh -q bastion.example.com "nc %h %p"
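
With that stanza in place, a plain `ssh prod-web01.example.com` (a made-up host name) hops through the bastion transparently; newer OpenSSH versions can express the same thing with `ProxyJump bastion.example.com` in place of the nc-based ProxyCommand.

    $ ssh prod-web01.example.com   # tunnelled via bastion.example.com automatically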

Adding a new key or removing an old one requires a bit of ceremony in this setup. I'd argue that adding a key should be an operation that leaves an audit trail and is visible to everyone. However, because of the overhead involved, I think people sometimes neglect to remove an old key when it is no longer needed, and we have no real way to track that except to clean up when an employee leaves the company. It also creates some extra friction when onboarding a new engineer, since they need to generate a new key and have it pushed out to all hosts before they can be fully productive.

However, the biggest benefit is having a separate username for each user, which makes it easy to do more granular access control when we need it and gives each user an identity that shows up in audit logs; that can be really useful when trying to track a production issue back to a sysadmin action.

Automated systems that take action against production hosts are awkward under this setup, since their "well-known" SSH keys can serve as an alternative access path. So far we've just given the user accounts for these automated systems the minimal access they need to do their jobs and accepted that a malicious user (who must already be an engineer with production access) could also perform those same actions semi-anonymously using the application's key.
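
One common way to narrow what such an automation key can do (not necessarily what we did) is to pin it to a forced command and strip the extras in authorized_keys; the command path and key comment here are made up:

    # authorized_keys entry for an automation account
    command="/usr/local/bin/run-deploy",no-pty,no-port-forwarding,no-agent-forwarding ssh-ed25519 AAAA... deploybot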

Martin Atkins
  • This is a really good answer! Follow-up question: you use cfengine to distribute .authorized_keys. Did you look into any of the ways to store SSH public keys in your LDAP server directly? They mostly require patching sshd, which feels fragile. – Evan Prodromou Jan 23 '13 at 18:55
  • I've done something similar with Chef. – gWaldo Jan 23 '13 at 20:09
  • @EvanProdromou I've set up SSH public keys in LDAP, but it was far more hassle than it was worth, given that I had to maintain up to date SSH packages myself, and it was around the time of a few SSH vulnerabilities – Daniel Lawson Jan 23 '13 at 20:25
  • Sudo rules and SSH keys may be placed in LDAP as well; SSSD can be used to set this up. Links: http://www.sudo.ws/sudoers.ldap.man.html and https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/openssh-sssd.html – fuero Jan 24 '13 at 16:34
  • I like your `ProxyCommand ssh -q` bit there! Never seen that. I'm hesitant to put up a bastion server but if it can be transparent to the end users then I might be all for it. Thanks Martin! – Randy L Aug 07 '16 at 18:58
  • How do you protect the GitHub repository itself? – Lucas Pelegrino Aug 31 '16 at 23:15
6

Personally I like the idea of each member of staff having one key on a dedicated SSH bastion machine, on which they have a basic user account. That user account has one SSH key, which grants access to all the servers they need to use. (These other servers should also be firewalled off so that only SSH access from the bastion machine is allowed.)

Then on their everyday work machines, laptops, tablets, etc., they can make their own choice between having one key shared between them or multiple keys.

As a systems admin on that network you have a minimal number of keys to look after (one per dev), you can easily monitor SSH access through the network (as it all routes through the bastion machine), and whether the devs want multiple keys or just one shared among their machines is no real issue, since you only have one machine to update (unless the bastion's SSH keys are compromised, but to be honest that is far more unlikely than one of the users' keys being compromised).
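
If you want to enforce the "only via the bastion" rule at the SSH layer as well as at the firewall, the keys on the downstream servers can carry a from= restriction; a rough sketch, assuming the bastion sits at 10.0.0.5:

    # ~/.ssh/authorized_keys on the protected servers
    from="10.0.0.5" ssh-ed25519 AAAA... jdoe@bastion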

peteches
5

I've had the situation where I've needed to provide SSH key access for a team of 40 developers to ~120 remote customer servers.

I controlled access by forcing the developers to connect through a single "jump host". From that host, I generated private/public keys and pushed them out to the customer servers. If the developers needed access from a laptop, they could use the same keypair on their local system.
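
The mechanics are the standard ones; roughly (the host and user names here are made up):

    # on the jump host: generate the keypair, then push the public half to each customer server
    ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa
    ssh-copy-id -i ~/.ssh/id_rsa.pub deploy@customer01.example.com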

ewwhite
2

I personally would go with per-user keys; then you instantly have accountability and can set restrictions far more easily. I don't know what other people think?

Rhys Evans
2

One approach I've heard of, but not used myself, is for each user to have a package (e.g. a .deb or .rpm) which contains their SSH public key config, as well as any dotfiles they like to customise (.bashrc, .profile, .vimrc, etc.). This is signed and stored in a company repository. This package could also be responsible for creating the user account, or it could supplement something else creating the account (cfengine/puppet, etc.) or a central auth system like LDAP.

These packages are then installed onto hosts via whatever mechanism you prefer (cfengine/puppet, a cron job, etc.). One approach is to have a metapackage which depends on the per-user packages.

If you want to remove a public key, but not the user, then the per-user package is updated. If you want to remove a user, you remove the package.

If you have heterogeneous systems and have to maintain both .rpm and .deb files, then I can see this being a bit annoying, although tools like alien might make that somewhat easier.

Like I say, I've not done this myself. The benefit in this approach to me is that it supplements a central LDAP system and central management of user accounts, in that it allows a user to easily update his package to include his .vimrc file, for example, without having to have that file managed by tools like puppet, which a user may not have access to.
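
For the .deb side the package can be tiny; a rough sketch with made-up names, assuming account creation is handled elsewhere (LDAP, puppet, etc.):

    jsmith-config/DEBIAN/control                      # Package: jsmith-config, Architecture: all, ...
    jsmith-config/home/jsmith/.ssh/authorized_keys    # the user's public keys
    jsmith-config/home/jsmith/.vimrc                  # any other dotfiles
    # build it, sign it, and drop it in the company repository
    dpkg-deb --build jsmith-config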

Daniel Lawson
1

You should take the safer route and require each user to have a separate key, and probably a separate key for each device.

If keys are shared between users, even in a small team, revoking a key is an inconvenience to everyone else.

If you have your staff use one key per device, then they can pick and choose which servers they can connect to with that device. For instance, a mobile laptop could be restricted to one or two servers, while the desktop at the office can have access to all the servers.
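
Generating a key per device is cheap and keeps the key comments self-describing; a small sketch (the names are illustrative):

    # run once on each device, then add only that device's public key
    # to the servers this device should be allowed to reach
    ssh-keygen -t ed25519 -C "alice@work-laptop" -f ~/.ssh/id_ed25519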

Dolan Antenucci
1

A few years ago Envy Labs wrote a tool called Keymaster (paired with its client, Gatekeeper) to handle something like this for development teams.

The project hasn't had much love over the last two years, but it is likely something you could tinker with, and perhaps bring back to life?

The repo is available on GitHub: https://github.com/envylabs/keymaster

Matthew Savage
-4

One approach could be setting up a NIS server with /home shared over NFS. Combine that with sudo on the servers, and allow only the users you want onto each server via the SSH configuration.

That way every member of the team uses only one user account and its key to access all the servers, with sudo and one password for the admin tasks.

Regards,

Rafael

  • 1) Enabling NIS is a security hole. 2) Having one password and account is contrary to traceability and accountability. – Deer Hunter Jan 23 '13 at 17:36