
Consider a network for a web application with one web server and one MySQL database, on physically separate servers, both running Linux. The database stores mission-critical data, which the web server must be able to manipulate.

Now, obviously, we want to make sure that only authorised changes are made to this database. This is largely enforced in the application, and partly handled by creating database users with limited privileges.

There remain gaps with this approach, however. For instance, the system administrator is not authorised to make changes to the database independently, nor are engineers with deployment rights. But they obviously could do so by grabbing the database credentials from the web server and logging in with them.

Any changes made this way would be essentially anonymous, obscured among the many legitimate changes, which makes them hard to detect.

It's (probably) not feasible or wise to implement something that will actually prevent the sysadmin from being able to make changes this way 100% of the time. The sysadmin and others should be able to make data and schema changes using their own accounts, as the application is actively developed. However, they shouldn't be able to do so without anyone noticing.

Here are some possible strategies to prevent insiders from changing data anonymously:

  • Implement audit logging on the system to report any programs executed. This would catch direct calls to the mysql CLI and perhaps some sketchy-looking scripts, but successfully performed changes would still be hard to find, as scripting would obscure what was actually changed. (A minimal auditd sketch follows this list.)

  • Use client certificates on the database connection, with pass-phrases on the private keys. This means a sysadmin is no longer able to reboot a machine without an application manager present to supply the pass-phrase. Other controls are probably needed to protect the keys in memory, the sysadmin being root and all. (A certificate-requirement sketch follows the list.)

  • Implement audit logging in the database. The web application causes a significant amount of database traffic, so malicious changes made with the application's database user are unlikely to be detected. This means monitoring can only feasibly be implemented for interactive database users. (An audit-plugin sketch follows the list.)

  • Use iptables to lock traffic to the database down to the application user. This would at least create an audit trail of the sysadmin becoming the application user (sudo) or changing the firewall to allow traffic out. Not foolproof, since any changes made with the application database user would still be hard to find. (An owner-match sketch follows the list.)

  • Implement SELinux to limit database traffic (packet labeling through SECMARK) to the application domain, and further secure the deployment and build chains to prevent or catch unauthorised changes. If the sysadmin does need network access to the database, this can be audited through auditallow, but administrative access should go through a separate user which is itself audited. Any system or SELinux changes would trigger alerts. This is obviously a high-impact change with many extra requirements and a steep knowledge demand. (A rough SECMARK sketch follows the list.)
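
A minimal sketch of the system audit logging idea, using Linux auditd (the watched path and key names are examples; persistent rules would go in /etc/audit/rules.d/):

    # Log any execution of the mysql CLI and tag it for later searching.
    auditctl -w /usr/bin/mysql -p x -k db-cli

    # Broader and noisier: log every execve, which also catches wrapper
    # scripts that invoke mysql under an innocent-looking name.
    auditctl -a always,exit -F arch=b64 -S execve -k exec-log

    # Review the trail afterwards:
    ausearch -k db-cli --interpret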
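
A sketch of the client-certificate requirement on the MySQL side (MySQL 5.7+ syntax; the account, host, and certificate subjects are placeholders). The pass-phrase on the web server's private key is what forces a human to be present at start-up:

    mysql -u root -p <<'SQL'
    -- A password alone is no longer enough: the connection must also
    -- present a client certificate with this exact subject and issuer.
    ALTER USER 'appuser'@'10.0.0.2'
      REQUIRE SUBJECT '/CN=webapp.example.org'
          AND ISSUER '/CN=Internal CA';
    SQL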
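
For auditing only the interactive database users, MariaDB's server_audit plugin can be scoped to named accounts (MySQL Enterprise ships a comparable audit plugin; the usernames are examples):

    mysql -u root -p <<'SQL'
    INSTALL SONAME 'server_audit';
    -- Audit only the interactive/administrative accounts; the noisy
    -- application user stays out of scope.
    SET GLOBAL server_audit_incl_users = 'alice,bob';
    SET GLOBAL server_audit_events = 'QUERY_DDL,QUERY_DML';
    SET GLOBAL server_audit_logging = ON;
    SQL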
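
The iptables idea maps onto the owner match in the web server's OUTPUT chain (the database address and the www-data user are examples):

    # Only the web application's system user may open connections to the
    # database host; everything else is logged, then dropped.
    iptables -A OUTPUT -p tcp -d 10.0.0.3 --dport 3306 \
             -m owner --uid-owner www-data -j ACCEPT
    iptables -A OUTPUT -p tcp -d 10.0.0.3 --dport 3306 \
             -j LOG --log-prefix "db-conn-denied: "
    iptables -A OUTPUT -p tcp -d 10.0.0.3 --dport 3306 -j DROP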
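
And a rough sketch of the SECMARK approach on the web server (the db_packet_t type and the policy module are hypothetical; a real policy needs the refpolicy development headers and considerably more care, e.g. CONNSECMARK for reply packets):

    # Label all packets leaving for the database port with a custom type.
    iptables -t mangle -A OUTPUT -p tcp --dport 3306 \
             -j SECMARK --selctx system_u:object_r:db_packet_t:s0

    # db_packet.te -- only the web application's domain may use such
    # packets; an auditallow on an admin domain would log break-glass use.
    cat > db_packet.te <<'EOF'
    policy_module(db_packet, 1.0)
    type db_packet_t;
    corenet_packet(db_packet_t)
    require { type httpd_t; }
    allow httpd_t db_packet_t:packet { send recv };
    EOF
    make -f /usr/share/selinux/devel/Makefile db_packet.pp
    semodule -i db_packet.pp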

Beyond just trusting the sysadmin, how would you approach this? Would the above things work? What other things could one do to make sure a system administrator or similarly privileged user can't access the database unnoticed?

NSSec

1 Answer


As with any security-related matter, it boils down to risk and the cost of mitigation. Below are some additional measures you can take, but some will be quite an investment.

This is actually a common problem. One thing you can do to reduce the risk is implement session recorders like Centrify, which also limit an administrator's access and can replay what they did. Technical accounts should never be used by administrators except through a system which establishes accountability, like CyberArk.

Access should be restricted and all logons should be reviewed and audited. Passwords or keys used for the database connection should be limited in scope: for example, only allow the technical database account to connect from the web server's IP (a sketch follows below). Preferably these credentials are set by the security officer and cannot be viewed by the administrator when inputting them, following a four-eye principle.
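
A sketch of such a host-scoped technical account (names, address, and grants are placeholders; the real password would be set by the security officer):

    mysql -u root -p <<'SQL'
    -- The account only exists for connections from the web server's IP.
    CREATE USER 'appuser'@'10.0.0.2' IDENTIFIED BY 'set-by-security-officer';
    GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'appuser'@'10.0.0.2';
    SQL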

The most important part here is to make sure that the people administering the servers are not the same people administering the log collection and analysis servers.

There has to be some level of trust. You can implement several controls to prevent fraud, but in the end it is still these people's job to make the system work, and hampering them is not beneficial for the adoption of new security measures. Detection is easier than prevention, and a bit more foolproof, because you will always have a trail somewhere. You just need to make sure you can guarantee your log integrity by logging remotely (a minimal sketch follows).
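
A minimal remote-logging sketch with rsyslog (the collector hostname is a placeholder); once events leave the box in real time, a local root can no longer quietly rewrite history:

    # Forward everything over TCP ('@@') to a collector that is
    # administered by different people.
    cat > /etc/rsyslog.d/90-remote.conf <<'EOF'
    *.* @@loghost.example.org:514
    EOF
    systemctl restart rsyslog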

Lucas Kauffman
  • +1 for "There has to be some level of trust and you can implement several controls to prevent fraud, but in the end it's still these people's job to make the system work.", good logging is the way to go imho (having experience as a SysAdmin of a sizable e-commerce system), it also helps detect and rectify human errors. – Selenog Oct 13 '15 at 10:18
  • 1
    Hadn't considered a full session log. You can also implement that using 'rootsh' on each machine. But that just means you can determine something occured at some time during incident response: you cannot prevent it from happening or perform (targetted) monitoring. – NSSec Oct 13 '15 at 10:59
  • @NSSec actually Centrify for instance allows you to automatically flag sessions using certain commands or accessing certain files. The security officer then receives an email including a link with what the user is doing. I'm not a vendor or affiliated with the product btw :P I just saw it working once and thought it was pretty cool. – Lucas Kauffman Oct 13 '15 at 11:02
  • See my point on system audit logging (which kinda provides the same thing, but host based). If the attacker uses (innocent-looking) scripts to perform database modifications from the webserver, even Centrify wouldn't catch that, I assume. This led me down the SELinux idea, but I guess that is kinda fragile from a system administration POV. I understand the cost-of-mitigation argument, but let's put that aside for a while and just look at what *could* be done :) – NSSec Oct 13 '15 at 11:39
  • 2
    So why don't you just not allow the use of unreviewed scripts? You can just put the scripts in a folder with read/execute flag so they can use them but not modify, and you restrict the access to a handful of commands for day-to-day operations. – Lucas Kauffman Oct 13 '15 at 11:52
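
A sketch of that restricted-scripts setup (directory and group names are examples): reviewed scripts are owned by root, and operators can execute but not modify them.

    # Operators in the 'dbops' group get read/execute on the reviewed
    # scripts, but no write access anywhere in the directory.
    mkdir -p /opt/db-scripts
    chown -R root:dbops /opt/db-scripts
    chmod 0750 /opt/db-scripts
    chmod 0750 /opt/db-scripts/*.sh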