
System Background

  • Sensitive data (e.g. credit card numbers or SSNs) must be stored in the system for a notable period of time, to serve the ongoing operation of the company.

  • Anyone can insert sensitive data; only specific users are allowed to retrieve it.

  • Precautions are taken to secure the application and auxiliary services (e.g. HTTPS connections, regular security updates, vetted authentication/session management, back-end SSH, and securing the application as a whole against XSS, SQLi, etc.).

  • Secure practices are encouraged on the end-user PCs and networks of those Authorized to access sensitive data, but they may not be enforceable from the server side.

  • The application is designed from the ground up (not a retroactive security upgrade).

Encrypted data isn't always secure

I would think that any security expert would tell you to encrypt sensitive data (e.g. SSNs or credit card numbers) on the machine, even in production, because even with appropriate precautions and a ground-up secure design, there is always a risk that the data could somehow be leaked, the server hacked, or the application exploited.

The problem I see with encryption is that the server must also have access to the key. We can store the key outside the SQL database, and back it up on paper instead of in the usual electronic archives; but there are many possible attacks besides SQL injection or backup theft in which the key would be stolen as well, defeating the encryption.

I envisioned a solution for key management some time ago, but only today am I posting it as a question requesting professional feedback.

Proposed solution, requesting feedback

  1. When a user account is Authorized to access/export sensitive data:

    • Password reset is required, with very high strength checks.

    • An Asymmetric (RSA) Key-pair is generated for the User. The Public key is stored in plain.

    • The Private User Key is encrypted using a Symmetric encryption derived from the user's (strong) password.

      → To access the Private User key for this User, you need to guess the user's password.

    • As usual, the password is saved with BCrypt, quite separately from Key Derivation.

  2. Sensitive data is submitted by a less-authorized user:

    • An Asymmetric Key-pair is generated for the Data. The Public Data key is stored in plain.

    • The Private Data key is encrypted using every single Authorized user's Public User key.

    • The sensitive data is encrypted using the Data key.

      → To access the sensitive data, you need to determine the private Data key, which is not stored directly, but can be accessed if you guess the password of an Authorized User.

    • The same Data key is re-used for up to 90 days.
      Older keys are purged when no data is associated with them.

  3. When an Authorized user signs in:

    • The sign-in is verified using BCrypt.

    • A Session Token is generated with at least 72 bits of entropy and passed to the browser.

    • The Private User key is temporarily decrypted (using Symmetric key derived from Password), and then re-encrypted for the current session. (using Symmetric key derived from Session Token)

      → To access the sensitive data, one can either guess a user's password, or steal the Session Token.

    • The server-side only stores a simple SHA-256 hash of the Session Token.

  4. While signed in, the Authorized user's browser passes the Session Token, which is used to decrypt the User key, which in turn decrypts the appropriate Data key, and then the sensitive data can be served where appropriate.
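
Steps 1 and 2 could be sketched roughly as follows. This is a minimal illustration, not the poster's implementation: it assumes the third-party Python `cryptography` package, and it wraps a *symmetric* Data key under each Authorized user's Public User key rather than generating an asymmetric Data pair (RSA-OAEP cannot encrypt a payload as large as a private-key PEM, so the usual envelope pattern substitutes a symmetric Data key; the access property is the same). All function names are invented for illustration.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, rsa

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def provision_user(password: bytes):
    """Step 1: generate the User key-pair.  The Public key is stored
    in plain; the Private key is PEM-encrypted under the password
    (BestAvailableEncryption runs a password KDF internally).  The
    BCrypt sign-in hash would be stored quite separately."""
    key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
    public_pem = key.public_key().public_bytes(
        serialization.Encoding.PEM,
        serialization.PublicFormat.SubjectPublicKeyInfo)
    private_pem = key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.PKCS8,
        serialization.BestAvailableEncryption(password))
    return public_pem, private_pem

def store_sensitive(plaintext: bytes, authorized_public_keys):
    """Step 2: encrypt the data under a fresh Data key, then wrap
    that Data key once per Authorized user's Public User key."""
    data_key = Fernet.generate_key()
    ciphertext = Fernet(data_key).encrypt(plaintext)
    wrapped = [pk.encrypt(data_key, OAEP) for pk in authorized_public_keys]
    return ciphertext, wrapped

# Demo: one Authorized user, one sensitive record.
pub_pem, enc_priv_pem = provision_user(b"correct horse battery staple")
alice_pub = serialization.load_pem_public_key(pub_pem)
ct, wraps = store_sensitive(b"4111-1111-1111-1111", [alice_pub])

# Retrieval requires the user's password to unlock the Private User key:
alice_priv = serialization.load_pem_private_key(
    enc_priv_pem, password=b"correct horse battery staple")
data_key = alice_priv.decrypt(wraps[0], OAEP)
recovered = Fernet(data_key).decrypt(ct)   # b"4111-1111-1111-1111"
```

The 90-day Data-key rotation in step 2 would simply mean reusing `data_key` (and its wrapped copies) for new records within the window instead of generating a fresh one per record.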

So, supposing the server or database were compromised, an attacker would need to do one of the following to gain access to sensitive data.

  • Guess one of the weaker passwords among the small pool of Authorized individuals, or
  • Steal a Session Token from a machine currently in use by an Authorized individual, or
  • Modify the operational software to begin storing the data in plain text (only possible if the intrusion provides read+write access).
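
For what it's worth, the session handling in steps 3 and 4 could be sketched like this, again assuming the third-party Python `cryptography` package; the token length and function name are illustrative, not from the question.

```python
import hashlib
import secrets
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

def start_session(enc_priv_pem: bytes, password: bytes):
    """Step 3: after a BCrypt-verified sign-in, decrypt the Private
    User key with the password and re-encrypt it under a fresh
    Session Token; the server keeps only SHA-256(token)."""
    user_key = serialization.load_pem_private_key(enc_priv_pem,
                                                  password=password)
    token = secrets.token_urlsafe(16)          # 128 bits of entropy
    session_pem = user_key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.PKCS8,
        serialization.BestAvailableEncryption(token.encode()))
    token_hash = hashlib.sha256(token.encode()).hexdigest()
    return token, session_pem, token_hash      # only `token` leaves the server

# Demo: provision a password-protected Private User key, then sign in.
user = rsa.generate_private_key(public_exponent=65537, key_size=3072)
enc_pem = user.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.PKCS8,
    serialization.BestAvailableEncryption(b"a-very-strong-password"))
token, session_pem, token_hash = start_session(enc_pem,
                                               b"a-very-strong-password")

# Step 4: each request presents the token; the server verifies its
# hash and unlocks the User key for that request only.
assert hashlib.sha256(token.encode()).hexdigest() == token_hash
unlocked = serialization.load_pem_private_key(session_pem,
                                              password=token.encode())
```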

It seems to me, this is the most robustly secure implementation possible.

A) Is this recommended?

B) Is this implemented in main-stream Libraries or Applications, or is this a rather unique idea?

700 Software
  • I'm sorry this is long; I wanted to include all the background. I think this is a good solution that should be implemented widespread, but only now am I asking you to review it. (i.e. never roll your own ...) – 700 Software Jul 11 '16 at 18:49
  • Also, does this strike you as a good question? Should it be edited further? – 700 Software Jul 11 '16 at 19:07

1 Answer


Here are some general considerations regarding a bigger architecture, in case more servers could be used to reduce the attack surface. In the above scenario, if the single server is compromised, it may reveal sensitive data over the period when admin users are logging in.

  1. There could be two web servers: one for clients and one for admins
  2. The database could be on another server

The clients' server inserts sensitive information into the database server without the ability to read anything back.

Now, only admins have permission on the database level to retrieve sensitive information.

The above can be achieved in many ways. For example, the database can encrypt the data itself and store multiple versions using its built-in routines (e.g. SQL Server), though this requires a bit of coding on the SQL Server itself. It also makes auditing better: if each admin uses their own credentials for the SQL Server, the sysadmin can see which sensitive records were viewed even if the admins' web server was compromised.

  • The clients' web server uses a public key to encrypt data and insert it into SQL Server, and the private key is kept on the admin web server. However, if one of the admins goes rogue and hacks the admin web server, they may read all sensitive data using a single database user and decryption key.

  • In case the clients' web server gets compromised, additional attacks can be launched against the SQL Server to retrieve all the information. For this reason an additional, in-the-middle copy might be used: the sensitive data is encrypted, stored, and published on the clients' server via an API; the admins' server then contacts the API on the clients' machine and stores the data in SQL Server, which removes any access to the SQL Server from the clients' server.

  • Client and admin web servers can run on the same physical server if resources are limited, or in the cloud in different containers.

  • If the clients' web server gets compromised by the installation of malicious software, one can read all the sensitive data. It would therefore be advisable to use the PKI built into the browser, or in JavaScript, so that the information is encrypted on the client side and sent to the server already encrypted with the public key. However, if the client server is compromised, the public key, or even the whole website, might get replaced, which raises another question: how can the client browser verify the authenticity of the application and the public key? This calls for a client-side application which would, for example, load the website in an embedded browser (a pre-installed executable with browser components built in), whose integrity can then be verified. I've been building such apps for some time, and here's a good reason to do it. :-) I am not sure whether web browsers support this; I've been doing executable apps with a built-in browser because that's one way of building cross-platform, dynamic user interfaces.

Here's something which might be helpful: Subresource Integrity.
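
As a concrete aside, the `integrity` attribute value that Subresource Integrity checks is just a base64-encoded digest of the resource; a quick Python sketch (the hash name and encoding follow the SRI specification):

```python
import base64
import hashlib

def sri_hash(resource_bytes: bytes) -> str:
    """Compute a Subresource Integrity value for use as
    <script integrity="...">: 'sha384-' followed by the
    base64-encoded SHA-384 digest of the resource."""
    digest = hashlib.sha384(resource_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode()

value = sri_hash(b"console.log('hello');")
# Used as: <script src="app.js" integrity="sha384-..."
#                  crossorigin="anonymous"></script>
```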

Anyway, simply monitoring the integrity of the website can be implemented with Nagios or Rundeck by running a continuous scan of the website, software, and keys on the clients' server itself. Another thing: if the keys get replaced, the whole system will cease to work, as the admins won't be able to decrypt the data.

It would be great if any given website could be signed for its domain and that signature could be checked against a 3rd party. Maybe a browser extension would be a good way to do this. Then if I load Google.com, the HTML and JavaScript are signed by a 3rd party, so that if they are replaced, the browser will not run them.

This is just slightly off-topic, but it might give some more ideas about how to take care of sensitive, user-submitted data.

Aria
    I like the point about logging which admin has viewed the data. While this is not relevant to encryption in a single server setup, it is an important consideration during ongoing operations, as well as the multi-server setup you describe. – 700 Software Jul 11 '16 at 19:34
  • Your last bullet point is clever. A browser could encrypt the data prior to transmission over HTTPS. This helps thwart read-write intrusions that would turn on logging, particularly when the primary static files are served *separately* from the Client (sensitive-data-collecting) server. (you don't want the Client server to be able to instruct the browser to bypass the signature check) – 700 Software Jul 11 '16 at 19:39