
The requirement is to encrypt the data on our servers 'at rest', including the database; in short, the whole infrastructure.

Threat Model - protect against a possible data breach by an adversary (or any unauthorized entity), against physical theft (and for safe disposal of drives), and allow secure duplication for safe backups.

Problem - We are a SaaS and deal with tons of users requesting data back and forth from our servers. The data we plan to encrypt (user files, messages, etc.) must therefore remain highly accessible and encrypted at the same time, without imposing a public/private key scheme on the user side. This leads to the following:

  • Data should be encrypted as soon as it enters the server (using some symmetric-crypto mechanism) and stay available for users when they ask for it. A bit of research turned up full-disk encryption mechanisms (e.g. LUKS/dm-crypt) and file-level tools like PGP, but I'm not sure whether they will do the job.

  • If full-disk encryption is the answer to our quest, the second issue is that the encryption key is available on the same server, which does not address our threat model (an attacker who gains server access will not take long to get the encryption key as well). It would also require someone to physically enter the (crypto) passphrase on every server reboot, which is not possible, since most of our team works remotely and prefers automation.
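One common alternative to full-disk encryption that fits the "encrypt on ingest" bullet above is application-level envelope encryption: each object gets its own data-encryption key (DEK), and only a *wrapped* copy of the DEK is stored next to the ciphertext, while the key-encryption key (KEK) lives in an external KMS/HSM rather than on the server's disk. A minimal sketch, assuming the third-party `cryptography` package; the KEK is generated locally here purely for demonstration, where a real deployment would call out to a KMS:

```python
# Sketch of envelope encryption (assumes `pip install cryptography`).
from cryptography.fernet import Fernet

# KEK: in production this lives in a KMS/HSM and never touches this
# server's storage; generated in-process here only for the demo.
kek = Fernet.generate_key()

def encrypt_on_ingest(plaintext: bytes) -> tuple[bytes, bytes]:
    """Encrypt incoming data with a fresh per-object DEK, then wrap
    the DEK with the KEK. Only ciphertext and the wrapped DEK are
    stored; the plaintext DEK is discarded after use."""
    dek = Fernet.generate_key()
    ciphertext = Fernet(dek).encrypt(plaintext)
    wrapped_dek = Fernet(kek).encrypt(dek)
    return ciphertext, wrapped_dek

def decrypt_on_read(ciphertext: bytes, wrapped_dek: bytes) -> bytes:
    """Unwrap the DEK (a KMS call in production), then decrypt."""
    dek = Fernet(kek).decrypt(wrapped_dek)
    return Fernet(dek).decrypt(ciphertext)

ct, wdek = encrypt_on_ingest(b"user file contents")
assert decrypt_on_read(ct, wdek) == b"user file contents"
```

Because the KEK is fetched at boot (or per-operation) from the KMS over the network, no one has to type a passphrase at the console on reboot, which addresses the remote-team concern.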

Precise Concern: Which encryption scheme will do the job efficiently, staying robustly available while covering the threat model above, and what is the preferred system architecture for handling those operations effectively?

The whole point is to protect the data stored on our side in every possible way, without compromising overall performance.

I'll highly appreciate your enlightenment in this regard.

MSalman
    I'm not a sysadmin, but the sysadmins at our company are able to access servers remotely during boot to enter the full disk encryption passwords. – Philipp Jan 05 '16 at 18:11
  • You don't think remote fde is a little insecure? – m2kin2 Jan 06 '16 at 08:02
  • @Philipp - I'm not certain about that matter (but have been provided with the above-stated information). It would surely help if you could double-check and describe how they successfully perform updates/patches/reboots etc. remotely. – MSalman Jan 06 '16 at 08:36
  • @m2kin2 Well, it depends - though in our threat model it will be beneficial (only if possible at all - in our scenario). – MSalman Jan 06 '16 at 08:38
  • @MSalman That might be more of a question for https://serverfault.com – Philipp Jan 06 '16 at 08:49
  • @Philipp true that - will check it out. – MSalman Jan 06 '16 at 08:55

1 Answer


I am the Security Specialist at a SaaS company and we were looking at the exact same issue earlier last year.

The main issue you will run into is cost. If you want your database to be readily available but encrypted 'at rest', you basically need a beast of a server that does nothing but handle the encryption/decryption process. Decrypting the data you need to read is going to cost you performance; even with a beast of a box, you are still going to see at least 1 or 2 milliseconds of added latency on your queries.
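You can get a rough feel for that per-query overhead with a quick measurement. A minimal sketch, assuming the third-party `cryptography` package and a row-sized 4 KB payload; absolute numbers will vary widely with hardware and cipher choice:

```python
# Rough micro-benchmark of symmetric encrypt+decrypt overhead per
# query-sized payload (assumes `pip install cryptography`).
import time
from cryptography.fernet import Fernet

f = Fernet(Fernet.generate_key())
payload = b"x" * 4096  # a typical row/document-sized blob

n = 1000
start = time.perf_counter()
for _ in range(n):
    roundtrip = f.decrypt(f.encrypt(payload))
elapsed_ms = (time.perf_counter() - start) * 1000 / n

assert roundtrip == payload
print(f"~{elapsed_ms:.3f} ms per encrypt+decrypt round trip")
```

Multiply that per-operation cost by your query volume to estimate whether a dedicated crypto box is justified.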

We looked into SafeNet's ProtectDB, but as we are not currently pursuing PCI or HIPAA compliance, we deemed that the ~$50k+ investment was not worth the limited benefit.

EDIT: ProtectDB is basically a separate server that houses all the encryption keys and scans your database to see where the data is. When you make a database query, the ProtectDB box receives the traffic instead, makes an encrypted connection to the database, decrypts the column/table, and returns the data. Thus your devs/users are no longer connecting directly to the database; instead they connect to the ProtectDB box.
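To make the pattern concrete (this is a toy illustration of the general encryption-proxy idea, not ProtectDB's actual API), here is a minimal sketch assuming the third-party `cryptography` package, with a plain dict standing in for the backing database:

```python
# Toy encryption-proxy pattern: clients talk only to the proxy, which
# holds the key and transparently encrypts/decrypts protected columns.
# (Assumes `pip install cryptography`; the dict stands in for a DB.)
from cryptography.fernet import Fernet

class EncryptingProxy:
    """Sits between the application and a backing store; the store
    only ever sees ciphertext for the protected columns."""

    def __init__(self, store: dict, protected: set):
        self._store = store
        self._protected = protected
        self._fernet = Fernet(Fernet.generate_key())  # key lives on the proxy

    def put(self, key: str, row: dict) -> None:
        self._store[key] = {
            col: self._fernet.encrypt(val.encode()) if col in self._protected else val
            for col, val in row.items()
        }

    def get(self, key: str) -> dict:
        return {
            col: self._fernet.decrypt(val).decode() if col in self._protected else val
            for col, val in self._store[key].items()
        }

backing = {}
proxy = EncryptingProxy(backing, protected={"ssn"})
proxy.put("u1", {"name": "alice", "ssn": "123-45-6789"})
assert proxy.get("u1")["ssn"] == "123-45-6789"      # readable via the proxy
assert backing["u1"]["ssn"] != "123-45-6789"        # ciphertext at rest
```

The trade-off is exactly the one described above: the proxy becomes both a latency hop on every query and a single high-value target holding the keys.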

Really, you are only getting two benefits: preventing someone from walking into your data center / DR site and stealing the physical box (which is what security guards and physical locks are there to prevent), and protection in case of a major external network breach where someone manages to get your database. In both cases, you are really only adding to the time it will take to break in anyway.

I would honestly recommend that you invest that money into a WAF (if you don't already have one), like Incapsula, instead.

Allison Wilson
  • Thank you so much for your detailed response. I've checked SafeNet's ProtectDB (wondering if there are more such solutions that are comparatively cheaper and have the credibility to rely on - do let me know if you know some). I agree that the benefits against the conceived threat model aren't up to the mark, but considering the heat of outrageous online data breaches in the market and other related data privacy issues, we want to add more layers of security (the "castle" / defense-in-depth approach). Also, I'm interested in a WAF and would appreciate further suggestions from your side. Thanking you again. – MSalman Jan 06 '16 at 10:52
  • The WAF will basically prevent the general external attacker from exploiting badly written code. We have some stupidly high number of SQL injection and XSS vulnerabilities in our code, so the cost to fix all of those at a 6+ year developer salary would be astronomical; we might as well rewrite the application from scratch. After we put our entire infrastructure behind the WAF, my application vulnerability scanners do not pick up anything. Compare my personal website running the Drupal CMS, which shows 162 various XSS and SQL injection vulnerabilities. – Allison Wilson Jan 06 '16 at 15:51
  • Thank you again for your detailed answer. Which WAF would you suggest, in your experience, for a mid-size company with an extensive number of clients using its SaaS? – MSalman Jan 07 '16 at 09:56
  • I only have personal experience with the Incapsula WAF (by Imperva) and we have been using it for almost two years now. One of the best features of it (and IMO what they undersell) is their caching of all your static web data. We reduced our bandwidth by about 25 ish percent (considerably more on our peak day each week) as Incapsula is serving up all our static content (each of our clients has their logo on their client portal, not to mention tutorial videos and other static content) rather than that data being transferred from our data center. – Allison Wilson Jan 07 '16 at 14:28
  • It surely sounds good (in many aspects) and will definitely discuss it with my team. Thank You for your suggestion. – MSalman Jan 08 '16 at 10:16