
Like so many others, I'm an intermediate web developer who is starting to get into the security side of things, and I'm looking to start running a Linux VPS (Debian). For years, I've resisted the move to a VPS because of the security implications. There are many, many guides available on the internet, and even questions on this StackExchange.

However, I'm still uncertain whether what these guides recommend is sufficient. To be clear, here is what I consider "standard advice":

  1. Regular system updates
  2. Regular backups
  3. Disable root access and create a limited user account (with sudo privileges)
  4. Harden SSH access (use key files, disable root logins, change the port, listen on only one inet protocol); a rough sketch of this and items 5 and 7 follows the list
  5. Use Fail2Ban
  6. Remove unused network-facing services (ex: samba, lp, Xserver)
  7. Configure a firewall
  8. Configure an intrusion detection system (ex: OSSEC, Tripwire)
  9. Run regular malware checks (maldet, ClamAV, rkhunter)
  10. Disable IPv6 (I'm not sure about this one)
  11. Make /boot read-only (I'm not sure about this one either)
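
To be concrete, here is roughly what I have in mind for items 4, 5 and 7 (a rough sketch for a Debian box; the ports, packages and settings are just the defaults I would assume, not a finished configuration):

    # Item 4: harden SSH in /etc/ssh/sshd_config, then reload the daemon:
    #   PermitRootLogin no
    #   PasswordAuthentication no
    #   PubkeyAuthentication yes
    #   AddressFamily inet            # listen on IPv4 only
    sudo systemctl reload ssh

    # Item 5: Fail2Ban with its default sshd jail:
    sudo apt install fail2ban
    sudo systemctl enable --now fail2ban

    # Item 7: minimal ufw firewall, deny inbound except SSH and HTTPS:
    sudo apt install ufw
    sudo ufw default deny incoming
    sudo ufw default allow outgoing
    sudo ufw allow 22/tcp
    sudo ufw allow 443/tcp
    sudo ufw enable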

For further information, my use case is the following: a server that holds very sensitive personal information and messages and would be a potentially desirable target for attacks. In this case, is the "standard advice" enough?

P.S. I'm aware that the best thing to do would probably be to hire a security expert, but I'm working pro bono for a non-profit and they don't have the budget for it.

Conor Mancone
  • 29,899
  • 13
  • 91
  • 96
daveslab
  • 141
  • 5
  • Unfortunately this question doesn't necessarily have an answer. "Is it enough" is very much context dependent. The security necessary for storing nuclear launch codes is different than that for an anonymous cute-cat-picture voting site. Your needs probably lie somewhere in the middle. Certainly, if you follow the above list (where applicable), you'll probably be doing better than most. You'll also have to be cognizant of potential application-level vulnerabilities in whatever services this server is hosting. – Conor Mancone Mar 25 '20 at 18:03
  • Personally I've never bothered with #5 or, for linux, #9. #8 would be nice but I consider that a secondary concern in most cases. I'm dubious about #10. – Conor Mancone Mar 25 '20 at 18:05
  • @ConorMancone Disabling IPv6 is simply advice from the 2000s, when many security tools did not have IPv6 capabilities. Now that most tools can deal with IPv6, it's outdated. – Mar 25 '20 at 18:07
  • More insight into software usage is required. – Aayush Mar 25 '20 at 19:40

2 Answers

2

For further information, my use case is the following: a server that holds very sensitive personal information and messages and would be a potentially desirable target for attacks. In this case, is the "standard advice" enough?

I would rather say it's the minimum, and some of the advice is questionable. Since you mention 'very sensitive personal information', the bar has to be even higher than that.

If you have very sensitive personal information, you need to think about the potential damage and also your legal liability. Working pro bono does not mean you cannot get sued if things go wrong. Even if you escape legal repercussions, your reputation could take a hit. If you are in the EU or under EU jurisdiction, GDPR applies and there are steep fines for failure to protect personal data. Other countries/states may have similar provisions.

Maybe it would be a good idea to request a professional penetration test to validate your setup. If the non-profit you are working for doesn't have the budget now, will they have the budget to handle the fallout when things go wrong?

I am a firm believer in Murphy's law: Anything that can go wrong will go wrong.

Even if your VPS is extremely well-secured, the web host could get hacked. If a hacker gains access to the hypervisor systems, he will have access to the VPSes. Someone who is after valuable data is going to find workarounds and weak spots somewhere. If you have a domain name, the hacker could hijack it by taking over your registrar account. So you always depend on third-party providers.

You are probably going to be hosting some application along with the data. Do you know it inside out? Maybe it is full of bugs and riddled with vulnerabilities like SQL injection. In that case, even a well-secured server will not help. The application has to be evaluated and audited as well.

Advice: think twice before accepting an assignment where you have nothing to gain and much to lose, especially if you think you lack the practical experience. This 'client' does not look like one that can be used as a guinea pig or training platform. You have no obligation to take on this job, and you don't want to be the lone person taking all the blame.

Kate
  • 6,967
  • 20
  • 23
1

The answer is simple: no. You can do all sorts of things on your server to make it more secure, give less access, et cetera, but at some point in time there will be vulnerabilities. Regular system updates will protect you against them if you apply them very regularly, but even then, they will always be a bit behind.

It means that you will need to create an environment where the server itself is less exposed. If the server holds very sensitive personal information, you would be a fool to expose it directly to the Internet. If you put your server behind a dedicated firewall and make it accessible only via a VPN tunnel with two-factor authentication on the firewall, then your server is a lot safer. But it depends on your use case.
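
To make that concrete, here is a minimal host-level sketch of the same idea using ufw (the wg0 interface name and the 10.8.0.0/24 VPN subnet are assumptions; a dedicated firewall in front of the VPS, as described above, is still the better place for such rules):

    # Deny everything inbound, then only accept SSH that arrives over the VPN
    # interface from the VPN subnet (both names are placeholders):
    sudo ufw default deny incoming
    sudo ufw allow in on wg0 from 10.8.0.0/24 to any port 22 proto tcp
    sudo ufw enable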

The point is that just throwing standard security measures at a server, without understanding the risks you run, creates a lot of work which may be partially pointless. You need to understand how the server is used, how it should be accessed, and what the risks are of it being accessed in ways it shouldn't be.

And some of the advice is a bit outdated or open for discussion. For example: changing the SSH port is not really a security measure anymore. And instead of key files, two-factor authentication may be sufficient too. Oh, and why should anyone log into the system (except for emergencies) other than via the application anyway?
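
As a rough illustration of the two-factor option: one way to require a key plus a TOTP code for SSH on Debian is a PAM module such as libpam-google-authenticator (a sketch, not a complete recipe; other PAM modules work similarly):

    # Enroll a TOTP secret for the login user and hook it into SSH's PAM stack:
    sudo apt install libpam-google-authenticator
    google-authenticator                 # run as the user who will log in

    # In /etc/pam.d/sshd, add the line below (and comment out '@include common-auth'
    # if you do not want a password prompt on top of the key and the code):
    #   auth required pam_google_authenticator.so

    # In /etc/ssh/sshd_config, require the key *and* the TOTP prompt:
    #   KbdInteractiveAuthentication yes
    #   AuthenticationMethods publickey,keyboard-interactive
    sudo systemctl reload ssh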

What is missing from the list is monitoring and follow-up. Read the access logs. An IDS is nice, but if you don't follow up on the alerts, it is pointless.
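
For example, even a few low-effort checks are better than no follow-up at all (a sketch; the unit and jail names assume a standard Debian SSH setup with Fail2Ban installed):

    # A quick look at who is being rejected and what got banned:
    sudo lastb | head                          # recent failed logins
    sudo journalctl -u ssh --since yesterday   # recent SSH activity
    sudo fail2ban-client status sshd           # current bans, if Fail2Ban is in use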

And then there is the application that is used to access the data. If it has vulnerabilities, then all your OS-level measures are of little use.

So do a threat analysis. Understand why you want to take certain measures and which threats they mitigate. Take measures in your network too. Create an environment where you don't have to log in every time (central log collection, automatic patching, etc.).
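
A minimal sketch of those last two points on Debian (the 192.0.2.10 log collector address is a placeholder; a real setup would also have to secure and watch the collector itself):

    # Unattended security updates:
    sudo apt install unattended-upgrades
    sudo dpkg-reconfigure -plow unattended-upgrades

    # Ship logs off the box so an intruder cannot quietly erase them.
    # /etc/rsyslog.d/remote.conf:
    #   *.* @@192.0.2.10:514         # @@ = TCP, a single @ = UDP
    sudo systemctl restart rsyslog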

Ljm Dullaart
  • 1,897
  • 4
  • 11
  • Thanks for the extra advice, you make some good points and I'll certainly take them into account. Indeed, I was thinking of putting the whole thing behind a pfSense, with the application on a Proxmox box with OpenVPN. I've already done that before and it seems much safer. – daveslab Mar 26 '20 at 20:38