0

I have a machine with a process, running 24/7, that signs code for users (encrypting the key is not possible); the machine will be protected as well as possible. Is there a way to set things up such that I can detect whether the private key has been compromised?

The most obvious sign would be a breach in general, but does this mean that I should automatically assume the private key has been compromised no matter what? I hope this doesn't sound like a stupid question. Revoking the key would cause a significant problem, especially if I can't determine the exact time the key was compromised. Tools like FIM (file integrity monitoring) can be used to detect modifications, and an auditing solution to track file access. But it seems to me that once there's been a breach, no matter how small, all bets are off.
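
A minimal sketch of the FIM idea just mentioned, assuming a small Python script run periodically (e.g. from cron); the key path and baseline location are placeholders:

```python
import hashlib
import json
import os
import sys

KEY_PATH = "/etc/signing/private.key"        # placeholder path to the private key
BASELINE = "/var/lib/fim/key_baseline.json"  # placeholder baseline store

def fingerprint(path):
    """Hash the file and record metadata that should rarely change."""
    st = os.stat(path)
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {"sha256": digest, "size": st.st_size, "mode": st.st_mode,
            "uid": st.st_uid, "mtime": st.st_mtime}

current = fingerprint(KEY_PATH)

if not os.path.exists(BASELINE):
    # First run: record the trusted state.
    with open(BASELINE, "w") as f:
        json.dump(current, f)
    sys.exit(0)

with open(BASELINE) as f:
    baseline = json.load(f)

# Any difference means the key file or its attributes changed since the baseline.
diffs = {k: (baseline[k], current[k]) for k in baseline if baseline[k] != current[k]}
if diffs:
    print(f"ALERT: key file changed: {diffs}")
    sys.exit(1)
```

Of course this only catches modification, not someone quietly reading the file, which is the harder part of the question.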

user8897013
  • 123
  • 4
  • Define what you mean by "compromised"? Do you mean if someone gets unauthorised access to it? – schroeder Mar 21 '19 at 08:06
  • Yes. Particularly the case where they gain access to the private key and do not make their presence/actions known (or attempt to hide them). – user8897013 Mar 21 '19 at 09:03
  • 3
    I'm not sure that's possible. If I copy digital data, what evidence could you possibly collect to detect the act of copying? The only thing you could realistically detect is if the private key was *used*: you could run anomaly detection on the use of keys (different client/IP/etc.; see the sketch after these comments). – schroeder Mar 21 '19 at 09:24
  • I'm thinking of a setup with a convoluted set of users which do and do not have access to the directory. The process would need access, but if it is run as a special user (maybe just www-data or whatever), I can compare audit logs containing file access times (hopefully including reads, to catch copying) against the times the process was active (using the private key for signing) and notice mismatched times if present. Beyond the process, only root should have access to that directory/file, and with only one root admin, all other sudoers (other employees) would need to be restricted. Just some thoughts. – user8897013 Mar 21 '19 at 09:39
  • 2
    What you described has nothing to do with keys, but with simple access control and logging. Once you look at it this way, there are tons of developed models to use. – schroeder Mar 21 '19 at 12:24
  • @schroeder can you recommend a few I should be considering? (for a read-only scenario) – user8897013 Mar 21 '19 at 12:30
  • @schroeder I think this differs **a little** from other types of sensitive information: in the case of PII or PHI, there's not much that can be done; you wouldn't change your name, and requesting a new SSN is seldom done. For regular passwords and the like, rotating them would be time-consuming but should not be too expensive. In the case of code signing, the certificates can be renewed, but the sheer cost of these certificates (code-signing certificates in particular) and the pain of revocation and all it entails for those with signed code offer a strong incentive not to revoke unless the compromise is obvious. – user8897013 Mar 21 '19 at 12:37
  • @schroeder correction to the above: if PII/PHI were compromised, you would be obligated to inform those users, something a company would greatly like to avoid... so in this sense the compromise of PII/PHI could also be very costly. – user8897013 Mar 21 '19 at 13:02
  • 2
    The strength of the potential impact does not change the base concept that what you are asking for is simply access control and logging. The strength of the potential impact simply means that you need to have more options and resources for response when you detect an anomaly. The underlying concepts, though, are the same. – schroeder Mar 21 '19 at 13:41
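
A toy illustration of the anomaly-detection idea from the comments above (flagging key usage by previously unseen clients), assuming the signing service writes JSON-lines request logs; the log paths and the user/source_ip field names are hypothetical:

```python
import json

BASELINE_LOG = "/var/log/signer/baseline.jsonl"  # placeholder: known-good request log
CURRENT_LOG = "/var/log/signer/today.jsonl"      # placeholder: requests to check

def client_ids(path):
    """Collect (user, source IP) pairs from a JSON-lines request log."""
    seen = set()
    with open(path) as f:
        for line in f:
            entry = json.loads(line)
            seen.add((entry["user"], entry["source_ip"]))
    return seen

# Flag signing requests coming from clients never seen during the baseline window.
known = client_ids(BASELINE_LOG)
for client in client_ids(CURRENT_LOG) - known:
    print(f"ALERT: signing request from previously unseen client {client}")
```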

2 Answers

4

You need a Hardware Security Module (HSM), as there's no reliable way to detect if someone else has your key. It's the same issue as knowing whether someone has your password: you can only find out if someone else uses the password in a place you can monitor.

With an HSM, the key never leaves the secure environment, your signing code can run inside of it, and its security provisions can erase the keys in the event of tampering.
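
A minimal sketch of what signing against an HSM can look like over PKCS#11, assuming the python-pkcs11 package and an RSA key; the module path, token label, PIN, and key label are placeholders:

```python
import pkcs11
from pkcs11 import KeyType, Mechanism, ObjectClass

# Placeholder values -- adjust for your HSM's PKCS#11 module and token setup.
MODULE_PATH = "/usr/lib/pkcs11/vendor-module.so"
TOKEN_LABEL = "code-signing"
USER_PIN = "1234"
KEY_LABEL = "release-key"

lib = pkcs11.lib(MODULE_PATH)
token = lib.get_token(token_label=TOKEN_LABEL)

def sign(data: bytes) -> bytes:
    """Ask the HSM to sign; the private key never leaves the device."""
    with token.open(user_pin=USER_PIN) as session:
        key = session.get_key(object_class=ObjectClass.PRIVATE_KEY,
                              key_type=KeyType.RSA,
                              label=KEY_LABEL)
        return key.sign(data, mechanism=Mechanism.SHA256_RSA_PKCS)

if __name__ == "__main__":
    print(sign(b"artifact bytes to sign").hex())
```

Only the data to be signed and the resulting signature cross the boundary; the key object itself stays on the token and is typically created non-extractable.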

ThoriumBR
  • 50,648
  • 13
  • 127
  • 142
  • How standard is it for companies to do code-signing without an HSM? I imagine a lot of companies may be limited in their use of one by cost. Is not using an HSM while code-signing considered "disreputable" in any way? – user8897013 Mar 21 '19 at 19:25
  • I have no idea. I know of some large companies that employ HSM, but not small ones... – ThoriumBR Mar 21 '19 at 19:32
  • @user8897013 There are all kinds of pseudo-HSM solutions, e.g. software "HSM"/vault systems, or having a dedicated system that only does signing, doesn't allow key material out of it, and keeps append-only logs of all access and usage; but that doesn't get any cheaper than a budget HSM like https://www.yubico.com/product/yubihsm-2/ or something. – Peteris Mar 22 '19 at 00:25
  • @Peteris How would something like the yubico work with a cloud VPS? Would I need physical access? I see AWS offers HSM services, but even their cheapest rate is near $1000/mo. – user8897013 Apr 01 '19 at 08:01
3

You cannot detect or verify a negative (non-compromise). As soon as you encounter any indication of compromise you can safely assume the key is compromised, but the absence of such indications does not let you conclude the opposite.

I've seen attempts to do similar things by checking file access times and comparing them to the last known good usage. However, it was wonky at best and fell apart completely if an adversary knew where that last-good-usage information was stored. Furthermore, it does not catch anything that reads the key out of memory.
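
For illustration, a minimal sketch of that kind of check, assuming the signing service itself records its last legitimate use of the key; the paths and tolerance are placeholders, and all the caveats above (an adversary editing the record, atime/relatime semantics, reads from memory) still apply:

```python
import os
import time

KEY_PATH = "/etc/signing/private.key"          # placeholder
LAST_GOOD_PATH = "/var/lib/signing/last_good"  # placeholder: epoch seconds written by the signer
TOLERANCE = 5  # seconds of slack between signer activity and the observed access time

# If the key file's access time is newer than the last legitimate use,
# something other than the signer read the file.
atime = os.stat(KEY_PATH).st_atime
with open(LAST_GOOD_PATH) as f:
    last_good = float(f.read().strip())

if atime > last_good + TOLERANCE:
    print(f"ALERT: key read at {time.ctime(atime)}, "
          f"after last legitimate use at {time.ctime(last_good)}")
```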

Perhaps separating the visible service (signing of code) and the keys onto separate machines might help. If the machine providing the visible service gets compromised, your keys are still secure. Of course, the keys must never leave the keyserver in such a scenario. All data required for creating the actual signature needs to be passed over a secure connection. In the best case, you just send the hashes to the keyserver and receive the signature back.
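
A minimal sketch of such a keyserver, assuming the cryptography package, an RSA key, and Python's standard HTTP server; the key path, bind address, and raw-digest protocol are placeholders, and a real deployment would put TLS and client authentication in front of it:

```python
# keyserver.py -- holds the private key, accepts SHA-256 digests, returns signatures.
from http.server import BaseHTTPRequestHandler, HTTPServer

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, utils

KEY_PATH = "/etc/signing/private.key"  # placeholder; readable only on this machine

# Assumes an unencrypted RSA private key in PEM format.
with open(KEY_PATH, "rb") as f:
    PRIVATE_KEY = serialization.load_pem_private_key(f.read(), password=None)

class SignHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The client sends only the 32-byte SHA-256 digest, never any key material.
        digest = self.rfile.read(int(self.headers["Content-Length"]))
        if len(digest) != 32:
            self.send_error(400, "expected a raw SHA-256 digest")
            return
        signature = PRIVATE_KEY.sign(
            digest,
            padding.PKCS1v15(),
            utils.Prehashed(hashes.SHA256()),
        )
        self.send_response(200)
        self.end_headers()
        self.wfile.write(signature)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8443), SignHandler).serve_forever()
```

The front-end machine hashes the artifact and sends only the digest; if it is compromised, an attacker can request signatures (which your logging and anomaly detection should catch) but cannot copy the key.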

fleitner
  • 129
  • 5