
I'm familiar with some of the more common ways of configuring a Linux server to be compliant with PCI-DSS 3.2, at least to the requirements of SAQ A. A common concern is requirement 8.5, which requires that:

Generic user IDs and accounts are disabled or removed

This includes the root user, which obviously cannot be disabled, so a "compensating control" (in the terminology of PCI-DSS) is needed. A common recipe is some variant of the following:

  • disable logins as root;
  • require logins by ssh to use an SSH key;
  • use sudo to get root;
  • install pam_loginuid so that the original login user ID is recorded at login and preserved after users escalate to root; and
  • install and configure auditd to record root actions together with that login user ID (a sketch of these last two items follows the list).
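
For reference, here is a minimal sketch of those last two items, assuming a RHEL/CentOS-style layout; file locations and the UID range vary by distribution, so treat it as illustrative rather than canonical:

    # /etc/pam.d/sshd -- set the audit login UID at login so it survives
    # a later sudo/su to root
    session    required     pam_loginuid.so

    # /etc/audit/rules.d/root-actions.rules -- log every command executed as
    # root (euid=0) by someone who originally logged in as a real user
    # (auid >= 1000); 4294967295 is the "unset" login UID
    -a always,exit -F arch=b64 -S execve -F euid=0 -F auid>=1000 -F auid!=4294967295 -k root-commands
    -a always,exit -F arch=b32 -S execve -F euid=0 -F auid>=1000 -F auid!=4294967295 -k root-commands

With this in place, `ausearch -k root-commands` ties each command run as root back to the auid, i.e. the user who originally logged in.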

However, in the case I'm dealing with today, it's not a single machine I'm securing: it's a small cluster (currently with 10 machines), and it's really, really useful to be able to ssh (and scp files) between the machines. Having to do that as a non-root user would be a real pain: almost always the file you need is only readable by root, and needs to be put somewhere only root can write.

What I'd like to do is allow ssh as root between the machines, using an SSH key present on the servers. This is easy enough in /etc/ssh/sshd_config with a PermitRootLogin directive in a Match Address block. I'm not too concerned about the security implications of allowing someone who has compromised one machine to gain control of the whole cluster: the machines are similar enough that if they manage to compromise one, they can probably use the same process to access the rest.
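
For concreteness, the sshd_config fragment I have in mind looks something like this (10.0.0.0/24 is a stand-in for the cluster's own subnet, and older OpenSSH versions spell prohibit-password as without-password):

    # /etc/ssh/sshd_config
    PermitRootLogin no

    # ...but allow key-only root logins from the other machines in the cluster
    Match Address 10.0.0.0/24
        PermitRootLogin prohibit-password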

However, if I do this, I lose the ability to track who is running what command, as the login UID is no longer attached to the process when I ssh to another machine. A compensating control in PCI-DSS needs to "meet the intent and rigor of the original PCI DSS requirement", and the intent of requirement 8.5 is stated as being to make it possible "to trace system access and activities to an individual". Without preserving the login UID, we're no longer providing a compensating control for allowing the root user to exist.

What I'm hoping to find is a way of passing the loginuid from server to server when logging in as root, probably by putting it into the environment. I don't mind that this requires implicitly trusting the originating server: it already is trusted. Can anyone suggest a means of doing this? Or, failing that, another way of tracing sysadmin activity as root to a particular user, while allowing easy ssh and scp between machines?
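
To make the question concrete, this is the sort of arrangement I'm imagining (the wrapper and all file names are hypothetical, and I don't know whether re-writing /proc/self/loginuid like this is actually reliable, which is really what I'm asking): a wrapper on the originating host exports the caller's login UID, sshd on the destination accepts it, and root's shell profile records it and tries to re-apply it.

    #!/bin/sh
    # /usr/local/bin/cluster-ssh -- hypothetical wrapper on the originating host:
    # export the caller's audit login UID so ssh can forward it
    ORIG_LOGINUID=$(cat /proc/self/loginuid)
    export ORIG_LOGINUID
    exec ssh -o SendEnv=ORIG_LOGINUID "$@"

    # /etc/ssh/sshd_config on the destination: accept the forwarded variable
    AcceptEnv ORIG_LOGINUID

    # /root/.bash_profile on the destination: record the forwarded login UID,
    # and try to restore it -- this can only work if pam_loginuid is skipped
    # for these cluster logins, so the value is still unset for the session
    if [ -n "$ORIG_LOGINUID" ]; then
        logger -t cluster-ssh "root shell on $(hostname) for loginuid $ORIG_LOGINUID"
        echo "$ORIG_LOGINUID" > /proc/self/loginuid 2>/dev/null || true
    fi

The obvious gaps are that a shell profile only covers interactive logins (scp sessions would need something else), and that the forwarded value is only as trustworthy as the originating host, which, as I said, is already trusted.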

richard
  • What exactly would you copy from host to host? Using puppet, chef or ansible to do the deed could provide the "paper trail" you need. – fuero Oct 12 '16 at 23:35
  • Tools like puppet will certainly be used for deployment and configuration. But when something's broken there's no substitute for looking around directly; nor can every problem be investigated on the dev cluster, which doesn't get the same real-world traffic. In this field, that often means writing short debugging scripts and copying them to run simultaneously on machines. Puppet's not a good tool for that. But there's also a non-technical answer: the client has said that given the choice between allowing `ssh` as root and PCI-DSS compliance, they'll choose the former. – richard Oct 12 '16 at 23:49
  • Can you get the machines out of the cardholder data environment? And as for copying a script to 10 machines at once, puppet might not be very good at this, but ansible is; it can even sudo for you, and it requires no infrastructure beyond ssh itself. – Michael Hampton Oct 13 '16 at 02:18
  • I'm happy with the physical security of the machines (and if the data centre's security were to be breached, there are far more interesting things to steal). I should also add that cardholder data is not present, and there is no current legal requirement to comply with PCI-DSS; however other sensitive data is present, and the parties concerned agree that it is appropriate to comply with PCI-DSS at least to SAQ-A. I confess to being less familiar with ansible, but my immediate view is that *any* solution using a central control server is inappropriate. – richard Oct 13 '16 at 10:02
