76

Assumptions:

  • Normal LAMP web server running a web app (e.g. AWS EC2 + Apache2 + MySQL + PHP 7).
  • Not directly targeted by some super-hacker or governmental organisation etc.
  • Related to the point above: no social engineering, and the web app itself is secure.

Targeted by whom?

Automated scans and exploits. Are there others?

Is running apt-get update && apt-get upgrade every so often enough to keep a web server secure?


If no

What else should the 'average' web app programmer, who is also taking care of the server, do to keep the web server reasonably secure for a startup company?

It depends...

Yes, it always depends on many things. Please include assumptions for the most common cases (Pareto principle) that the common web app programmer may or may not be aware of.

MPS
  • is it homework? – Purefan Feb 20 '17 at 08:32
  • @Purefan No, it isn't. I tried to frame the question in a way that would draw answers broad enough to help me as well as other programmers who similarly have to take care of the server. – MPS Feb 20 '17 at 08:43
  • Have you considered unattended-upgrades? – Calimo Feb 20 '17 at 09:18
  • Ubuntu can automatically run its updates with fairly little trouble. It would almost certainly install them before you ever did. – trognanders Feb 20 '17 at 09:19
  • @Calimo No I hadn't. Thank you for pointing me to that package. After reading [this related question](http://askubuntu.com/questions/9/how-do-i-enable-automatic-updates), a short follow-up: Is the risk of these unattended upgrades running into errors and stopping the web app negligible? – MPS Feb 20 '17 at 09:32
  • @MPS That has never happened to me; everything always restarted, and I've been using it on several servers for many years. But it will certainly shut down your application while updating. Unless instructed otherwise it will stop mysql but not apache, and only you know what happens next. – Calimo Feb 20 '17 at 09:42
  • @Calimo Ok, thanks for the heads-up. I will investigate that option a bit further. – MPS Feb 20 '17 at 09:47
  • Does it run a crappy PHP app? Most web server breaches I've seen are from the apps you run on them, not the server software or the OS itself. – André Borie Feb 20 '17 at 13:30
  • This question isn't framed very well because you're asking how to "keep a web server secure," which is too broad for a Q&A site and impossible to answer in the space provided. You should ask a more narrow question, like "how should I do patch management on Debian Linux?" If that is your question, then keep in mind that apt-get update will download a patched kernel, but you have to reboot to actually run the patched kernel. So rebooting/planned downtime is an important factor in your patch management. – Mark E. Haase Feb 20 '17 at 18:17
  • @AndréBorie The OP specifically excluded the app from consideration, "and the web app itself is secure". Only the OS, server environment, and configuration relate to the question. Yes, PHP apps can be a huge hole in an otherwise secure setup, but the OP isn't asking about that. Keeping the environment secure, excluding the app, is the OP's target. –  Feb 20 '17 at 21:54
  • You mention LAMP. Are the components of LAMP managed by apt? – Jonas Schäfer Feb 21 '17 at 08:27
  • @JonasWielicki Well spotted that they might not be. In my case they are. Others will have to think of that too. Thank you. – MPS Feb 21 '17 at 08:58
  • @MPS In my experience, 99% of compromises are directly exploiting flaws in the app, not the web server packages. I have been using unattended-upgrades for years without issue (aside from small /boot filling up with kernels) and recommend it. – cscracker Feb 21 '17 at 15:51
  • Depends on the Linux distribution and the repositories you add. For example, with Ubuntu LTS only a limited number of packages are under maintenance - for some time. Sooner or later you need to dist-upgrade, and you should follow security advisories closely; some recommend manual actions. – eckes Feb 21 '17 at 23:25
  • You need to make sure that components restart or otherwise reload so they actually use the updated versions. – OrangeDog Feb 22 '17 at 16:31

8 Answers

77

You've removed a lot of problems that normally get you in trouble (namely, assuming that the app you're hosting is completely secure). From a practical perspective, you absolutely have to consider those.

But presumably since you're aware of them, you have some protective measures in place. Let's talk about the rest, then.

As a start, you probably shouldn't run an update "every so often". Most distros operate security announcement mailing lists, and as soon as a vulnerability is announced there, it's rather public (well, it often is before that, but in your situation you can't really monitor all the security lists in the world). These are low-traffic lists, so you should really subscribe to your distro's and upgrade when you get notifications from it.

Often, a casually-maintained server can be brute-forced or dictionary attacked over a long period of time, since the maintainer isn't really looking for the signs. It's a good idea then to apply the usual counter-measures - no ssh password authentication, fail2ban on ssh and apache - and ideally to set up monitoring alerts when suspicious activity occurs. If that's out of your maintenance (time) budget, make a habit of logging in regularly to check those things manually.
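As a concrete illustration of those counter-measures, here is a minimal sketch of key-only SSH plus fail2ban on a stock Ubuntu/Debian install (the file paths and jail names are the package defaults; adapt to your distro):

```
# /etc/ssh/sshd_config -- standard OpenSSH options for key-only, no-root login:
#   PasswordAuthentication no
#   ChallengeResponseAuthentication no
#   PermitRootLogin no
# ...then reload the daemon (the service is named "ssh" on Debian/Ubuntu):
sudo systemctl reload ssh

# fail2ban ships an sshd jail; enable it and an apache auth jail in an override:
sudo apt-get install fail2ban
sudo tee /etc/fail2ban/jail.d/local.conf >/dev/null <<'EOF'
[sshd]
enabled = true

[apache-auth]
enabled = true
EOF
sudo systemctl restart fail2ban
```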

While not traditionally thought of as a part of security, you want to make sure you can bring up a new server quickly. This means server configuration scripts (tools like Ansible, Chef, etc. are useful in system administration anyway) and an automatic backup system that you've tested. If your server's been breached, you've got to assume it's compromised forever and just wipe it, and that sucks if you haven't been taking regular backups of your data.
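On the backup point, even a small nightly cron script that dumps the database and pushes everything to a separate machine goes a long way. A minimal sketch, assuming MySQL and rsync over SSH; the hostnames, paths and database name are placeholders:

```
#!/bin/sh
# /etc/cron.daily/site-backup -- nightly dump plus off-server sync (sketch).
set -eu

STAMP=$(date +%F)
BACKUP_DIR=/var/backups/site
mkdir -p "$BACKUP_DIR"

# Dump the database; credentials come from root's ~/.my.cnf, never the command line.
mysqldump --single-transaction myapp_db | gzip > "$BACKUP_DIR/db-$STAMP.sql.gz"

# Push the web root and the dumps to a *separate* host. In a real setup, prefer
# pull-based or append-only backups, so a compromised server cannot destroy
# its own history.
rsync -a /var/www "$BACKUP_DIR" backup@backup-host.example.com:/srv/backups/web01/
```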

Xiong Chiamiov
  • Thank you for your answer. There seem to be a lot of email lists, for example for [Ubuntu Server](https://lists.ubuntu.com/mailman/listinfo/ubuntu-security-announce). Are these the email lists you are talking about? – MPS Feb 20 '17 at 08:55
  • +1 for considering a breached server a brick. If you don't have a good backup plan, and use it, you'd better have a good résumé. –  Feb 20 '17 at 10:31
  • @MPS Yep, that's the one (for Ubuntu). – Xiong Chiamiov Feb 20 '17 at 18:37
  • Besides fail2ban, I find a properly configured [logcheck](http://www.logcheck.org/) to be invaluable in monitoring a large number of systems. With it, you set up whitelists of normal activity, and anything outside of normal which shows up in the (configurable set of) system log files is automatically e-mailed to a recipient of your choice (ideally quickly ending up on a different, minimal, secure system). It's not *perfect*, but it goes a *very long way* toward catching unusual stuff long before that unusual stuff becomes a problem. – user Feb 21 '17 at 07:43
  • I would go further and follow the Unix philosophy: minimum services. As such, there is no place for exposing SSH services to the Internet at large. Try scanning port 22 on any of the Google servers, for instance. – Rui F Ribeiro Feb 21 '17 at 07:47
  • With Debians, I like to have apticron running on the server. It sends you emails when new updates are available for any installed package. This helps in cases where packages you weren't even aware you had installed receive security updates. Combine with apt-listchanges and apt-listbugs to avoid nasty surprises (even though it is rare with Debian stable that either apt-list{bugs,changes} outputs anything). – Jonas Schäfer Feb 21 '17 at 08:25
  • "While not traditionally thought of as a part of security, you want to make sure you can bring up a new server quickly." - I'd say this is a matter of [Availability](https://en.wikipedia.org/wiki/Information_security#Availability), whilst modern tools make it much easier to implement. – Carrosive Feb 21 '17 at 10:19
27

No. This is not enough to keep you secure.

It'll probably keep you secure for some time, but security is complex and fast-moving, so your approach really isn't good enough for long-term security. If everybody made the same assumptions you're making in your question, the internet would be one big botnet by now.

So no, let's not limit this question to packages. Let's look at server security holistically so anybody reading this gets an idea of how many moving pieces there really are.

  • APT (eg Ubuntu's repos) only covers a portion of your software stack. If you're using (eg) Wordpress or another popular PHP library and that isn't repo-controlled, you need to update that too. The bigger frameworks have mechanisms to automate this, but make sure you're taking backups and monitoring service status, because these updates don't always go well.

  • You wrote it all yourself, so you think you're safe from the script kiddies? There are automated SQL injection and XSS exploit bots running around, poking every query string and form alike.

    This is actually one of the places where a good framework helps protect against inadequate programmers who don't appreciate the nuances of these attacks. Having a competent programmer audit the code also helps allay fears here.

  • Does PHP (or Python, or whatever you're running) really need to be able to write everywhere? Harden your configuration and you'll mitigate many attacks. Ideally the only places a webapp is able to write are a database, and places where scripting will never be executed (eg an nginx rule that only allows serving static files).

    The PHP defaults (at least as people use them) allow PHP to read and write PHP anywhere in the webroot. That has serious implications if your website is exploited. (A permissions sketch follows this list.)

    Note: if you do block off write access, things like WordPress won't be able to automagically update themselves. Look to tools like wp-cli and get them to run on a scheduled basis.

  • And your update schedule is actively harmful. What on earth is "every so often"? Critical remote security bugs have a short half-life, but there's already a delay between 0-day and patch availability, and some exploits are also reverse-engineered from patches (to catch the slow-pokes).

    If you're only applying updates once a month, there's a very strong possibility you'll be running exploitable software in the wild. TL;DR: Use automatic updates.

  • Versions of distributions don't last forever. If you were sensible and picked a LTS version of Ubuntu, you've got 5 years from initial release. Two more LTS versions will come out within that time and that gives you options.

    If you were on a "NEWER IS BETTER" rampage and went with 16.10 when you set your server up, you've got 9 months. Yeah. Then you have to upgrade through 17.04 and 17.10 before being able to relax on 18.04 LTS.

    If your version of Ubuntu lapses, you can dist-upgrade all day long; you're not getting any security upgrades, though.

  • And the LAMP stack itself isn't the only attack vector to a standard web server.

    • You need to harden your SSH configuration: only use SSH keys, disable passwords, shunt the port around, disable root logins, monitor brute-force attempts and block them with fail2ban.
    • Firewall off any other services with ufw (et alii).
    • Never expose the database (unless you need to, and then lock down the incoming IP in the firewall).
    • Don't leave random PHP scripts installed or you will forget them and they will get hacked.
  • There's no monitoring in your description. You're blind. If something does get on there and starts pumping out spam, infecting your webpages, etc, how can you tell something bad happened? Process monitoring. Scheduled file comparison against git (make sure it's read-only access from the server).

  • Consider the security (physical and remote) of your ISP. Are the dime-a-dozen "hosts" (aka CPanel pirates) —sqwanching out $2/month unlimited hosting plans— investing the same resources in security as a dedicated server facility? Ask around and investigate the history of breaches.

    Note: A publicised breach isn't necessarily a bad thing. Tiny hosts tend not to have any record and when things are broken into, there aren't the public "post-mortems" that many reputable hosts and services perform.

  • And then there's you. The security of the computer you code all this stuff on is almost as important as the server's. If you use the same passwords everywhere, you're a liability. Secure your SSH keys with a physical FIDO U2F key.
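Picking up the write-access bullet above, here is a minimal permissions sketch. It assumes Apache/PHP runs as www-data and that deploys happen as a separate deploy user; the user names and paths are illustrative, not prescriptive:

```
# Source files: owned by the deploy user, readable but not writable by the web server.
sudo chown -R deploy:www-data /var/www/myapp
sudo find /var/www/myapp -type d -exec chmod 750 {} +
sudo find /var/www/myapp -type f -exec chmod 640 {} +

# Only the upload directory is writable by PHP -- and nothing uploaded there
# should ever execute as code. With nginx in front you could add something like:
#   location ^~ /uploads/ { types { } default_type application/octet-stream; }
sudo chown -R www-data:www-data /var/www/myapp/uploads
sudo find /var/www/myapp/uploads -type d -exec chmod 770 {} +
```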

I've been doing devops for ~15 years and it is something you can learn on the job, but it really only takes one breach —one teenager, one bot— to ruin an entire server and cause weeks of work disinfecting your work product.

Just being conscious of what's running and what is exposed helps you make better decisions about what you're doing. I just hope this helps somebody start the process of auditing their server.

But if you —the everyman average web app programmer— are unwilling to dig into this sort of stuff, should you even be running a server? That's a serious question. I'm not going to tell you you absolutely shouldn't, but what happens to you when you ignore all this, your server is hacked, your client loses money and you expose personal customer information (eg billing data) and you're sued? Are you insured for that level of loss and liability exposure?

But yeah, this is why managed services cost so much more than dumb servers.


On the virtue of backups...

A full system backup is possibly the worst thing you could keep around —for security— because you'll be tempted to use it if you get hacked. Their only place is recovering from a hardware failure.

The problem with using them after a hack is that you reset to an even earlier point in time. Yet more flaws in your stack are apparent by now, and even more exploits exist for the hole that got you. If you put that server back online, you could be hacked instantly. You could firewall off incoming traffic and do a package upgrade, and that might help you, but at this point you still don't know what got you, or when it got you. You're basing all your assumptions off a symptom you saw (ad injection on your pages, spam being bounced in your mailq). The hack could have been months before that.

They're obviously better than nothing, and fine in the case of a disk dying, but again, they're rubbish for security.

Good backups are recipes

You want something —just a plain-language document, or something technical like an Ansible/Puppet/Chef routine— that can guide somebody through restoring the entire site to a brand new server. Things to consider (a skeleton script follows the list):

  • A list of packages to install
  • A list of configuration changes to make
  • How to restore the website source from version control.
  • How to restore the database dump*, and any other static files you might not version-control.
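For shape, here is what such a recipe might look like as a shell skeleton; every package name, repository URL and path below is a placeholder:

```
#!/bin/sh
# restore.sh -- rebuild the site on a brand new server (illustrative skeleton).
set -eu

# 1. Packages.
apt-get update
apt-get install -y apache2 mysql-server php libapache2-mod-php

# 2. Configuration changes (kept in version control alongside the app).
cp config/apache/myapp.conf /etc/apache2/sites-available/
a2ensite myapp
a2enmod rewrite

# 3. Website source from version control.
git clone https://example.com/you/myapp.git /var/www/myapp

# 4. Database dump and other non-versioned static files.
#    Audit the dump for rogue admin users *before* loading it (see the note below).
gunzip -c /srv/backups/db-latest.sql.gz | mysql myapp_db

systemctl reload apache2
```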

The more verbose you can be here, the better because this also serves as a personal backup. My clients know that if I die, they have a tested plan to restore their sites onto hardware they control directly.

A good scripted restore should take no more than 5 minutes. So even the time-delta between a scripted restore and restoring a disk image is minimal.

* Note: database dumps must be checked too. Make sure that there aren't any new admin users in your system, or random script blocks. This is as important as checking the source files or you'll just be hacked again.

Oli
  • If you didn't create the infection, you can never _be sure_ you cleaned all of it, even after weeks of disinfection work. That's why you __must__ have backups: backups of your work product, your database, your server configs, and everything else you can't easily replace. If, like the OP, you control the server, you should have a backup of the entire system, from the kernel up. –  Feb 23 '17 at 04:04
  • I don't think I could disagree more with kernel-up backups. They're a harmful waste of time that lend themselves to being blindly restored and thinking that fixed the issue. Backups should be descriptive recipes on how to assemble stuff that is unique to you. And like any recipe, they should be tested and refined. Chances are just writing it will make your configuration better. Full backups should only be used for hardware failure, and even then, a scripted recipe would be nearly as fast. – Oli Feb 23 '17 at 11:20
20

The chance is high that you will keep the server mostly secure if you run updates often (i.e. at least daily, instead of only "every so often").

But critical bugs happen from time to time, like Shellshock or ImageTragick. Insecure server configuration can also make attacks possible. This means you should take more actions than just running regular updates, such as:

  • reduce the attack surface by running a minimal system, i.e. don't install any unnecessary software
  • reduce the attack surface by restricting any services accessible from outside, i.e. don't allow password-based SSH login (key-based only), don't run unneeded services, etc
  • make sure you understand the impact of critical updates
  • expect the system to get attacked and try to reduce the impact, for example by running externally accessible services inside some chroot, jail or container (a sketch follows this list)
  • log important events like failed logins, understand the logs and actually analyze the logs
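For the 'reduce the impact' point, one low-effort option on systemd-based distros is a sandboxing drop-in for the service. A sketch; the directives are standard systemd ones, but ReadWritePaths needs a reasonably recent systemd (older releases spell it ReadWriteDirectories), and you must verify your workload still runs afterwards:

```
# /etc/systemd/system/apache2.service.d/hardening.conf
[Service]
# Mount /usr, /boot and /etc read-only for this service:
ProtectSystem=full
# Hide /home, /root and /run/user:
ProtectHome=true
# Give the service a private /tmp:
PrivateTmp=true
# Block privilege escalation via setuid binaries:
NoNewPrivileges=true
# The only path the app may write (illustrative):
ReadWritePaths=/var/www/myapp/uploads
```

Apply it with sudo systemctl daemon-reload && sudo systemctl restart apache2, then exercise the application to confirm it didn't need a path you just took away.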

Still, the most common initial attack vector is probably an insecure web application like Wordpress or another CMS. But your assumption was that the web application is fully secure, so hopefully it really is.

Steffen Ullrich
  • Thank you for your answer with additional actions to consider. I will check out the logs and set up a system to supervise them. – MPS Feb 20 '17 at 08:58
  • +1 for pointing out that the attack vector can be insecure web apps – TuringTux Feb 20 '17 at 19:33
6

Most modern Linux distributions come with some sort of automatic update solution. You should consider turning it on on your servers. This will greatly reduce the time your server is vulnerable to attacks.

As you are mentioning Debian, you should consider setting up unattended-upgrades. RedHat has yum-cron, and Suse can get them through YaST.

These upgrades are normally limited to security patches and are unlikely to break your system; however, it is not totally impossible. Ultimately it is up to you to weigh the risks and benefits of this approach.
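For reference, enabling it on Debian/Ubuntu takes two commands; the file names below are the stock ones shipped by the package:

```
sudo apt-get install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades   # interactive enable

# ...or write the periodic configuration directly:
sudo tee /etc/apt/apt.conf.d/20auto-upgrades >/dev/null <<'EOF'
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
EOF

# Which origins are auto-applied (security only, by default) is configured in
# /etc/apt/apt.conf.d/50unattended-upgrades.
```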

Calimo
3

apt-get upgrade only installs newer versions of already installed packages. It does not install packages which are not currently installed, and it will not upgrade an already installed package if the newer version depends on a package which is not currently installed.

In Debian and Ubuntu, each version of the kernel is put in a separate package, with the version included in its name. There is also a virtual package which always depends on the latest available kernel, the dependency being updated with each version. For example, as of now, linux-image-generic in xenial depends on linux-image-4.4.0-63-generic. This scheme allows old kernel versions to be kept installed, in case the newer one turns out to be incompatible with your hardware.

However, this means that apt-get upgrade will not install newer kernels - you need apt-get dist-upgrade for that. Many people avoid running dist-upgrade automatically, though, as it can also remove packages. In newer versions of Ubuntu, you can use apt upgrade, which installs new dependencies but never removes any packages.

Also, when you upgrade OpenSSL you need to at least reload the services which use that library; on Ubuntu the system simply asks for a reboot to stay on the safe side, and many people also have reservations about automatic reboots.
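Two quick checks follow from this: whether the running kernel matches the newest installed one, and which services are still using pre-upgrade libraries. A sketch (checkrestart comes from the debian-goodies package; needrestart is a separate package doing the same job):

```
# Running kernel vs. newest installed kernel package:
uname -r
dpkg -l 'linux-image-[0-9]*' | awk '/^ii/ {print $2}' | sort -V | tail -1
# If they differ, schedule a reboot.

# Long-running processes still mapping deleted (i.e. upgraded) libraries:
sudo apt-get install debian-goodies && sudo checkrestart
# or:
sudo apt-get install needrestart && sudo needrestart
```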

Neith
  • I do believe apt-get upgrade installs packages, and that you are confusing it with apt-get update. – MPS Feb 21 '17 at 02:04
  • From `man apt-get`: "Packages currently installed with new versions available are retrieved and upgraded; under no circumstances are currently installed packages removed, nor are packages that are not already installed retrieved and installed. New versions of currently installed packages that cannot be upgraded without changing the install status of another package will be left at their current version." I'll clarify the answer. – Neith Feb 21 '17 at 02:11
  • apt-get autoclean && apt-get autoremove will help with this. – mumbles Feb 23 '17 at 14:05
2

There are some good answers here already. However, I wanted to fill in some gaps and point out a couple of things which don't appear to be addressed in some of the existing answers.

Not directly targeted by some super-hacker or governmental organisation etc

This is a dangerous perspective. Many small organisations have been forced into bankruptcy by variations of this core idea. You really cannot predict who might find a use or reason for hacking your server. It is precisely because of this type of underlying assumption that a government or super-hacker might target your system. They may not be interested in your application or data; they might just want an innocent 'jump-off' point to use as part of a larger or more complex hack on a more valuable target.

Normal LAMP Web-server running web app. (Eg. AWS EC2+Apache2+MySQL+Php7)

Be wary of 'normal'. Your setup may not be as normal as you think. A lot depends on what you have installed and which repositories your packages come from. Some of the variations to be aware of include:

  • Distribution type. For example, there is a big difference between Ubuntu LTS and non-LTS distributions. As a rule of thumb, non-LTS distributions will typically have more frequent updates and it will be important to perform updates (or review updates - see below) more frequently.

  • Repository Type. Most distributions, including Ubuntu, have different types of repositories. There are the core repositories maintained by Ubuntu, which typically go through fairly robust testing cycles and receive updates in a fairly timely manner. Then there are 'contrib'-type repositories which are not maintained by the core distribution team; these can come from 3rd-party partners or other users or development groups. The emphasis placed on these repositories can vary greatly: some have a high focus on security and a lower focus on stability, others focus on stability over security. Knowing/understanding the repositories you have installed software from is important.

  • Know what is installed. All too often you hear of a system that has been compromised, only for the compromise to have occurred in an overlooked package - either one installed by default, or one required by the LAMP stack as a deeply buried or subtle dependency you were not aware of. Make sure you have removed any unnecessary packages (something which can be hard to determine, especially as some packages have unusual dependencies). A short audit sketch follows this list.

  • Auxiliary Libraries. It is common to overlook additional libraries which have been installed but which are not managed by the apt ecosystem. For example, a PHP library needed for some special purpose that either was not in the distribution's standard load or which needed to be built on the system due to other dependencies. A common example is PHP database drivers for commercial databases. While things have likely improved since the last time I had to manage a PHP-based stack, I can recall a time when you had to build the PHP driver for Oracle yourself. While the process was relatively trivial, you had to remember to rebuild that driver after dependent libraries were upgraded, to ensure the library was linked against the patched version. This can also be an issue with things like OpenSSL: a distribution might install an upgraded version of the shared library, but you may need to take additional actions to ensure your application is actually using the upgraded library and not the old one with a known vulnerability.
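A few stock apt commands cover most of that audit; all of these are standard Debian/Ubuntu tooling (the package name in the last line is illustrative):

```
# Which repositories are configured, and at what pin priority:
apt-cache policy

# Everything that is installed, with versions:
apt list --installed        # or: dpkg -l

# Packages you installed explicitly, as opposed to pulled-in dependencies:
apt-mark showmanual

# Where a particular package came from:
apt-cache policy php7.0
```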

You should check for updates on a daily basis and then review those updates to assess their importance and their potential to break your system. While Ubuntu's apt update process is fairly robust and rarely breaks things, it does happen from time to time. Due to the emphasis on stability, especially for LTS versions, the update process tends to be conservative, so you need to review the updates and not just run apt-get upgrade. For example, due to possible dependency issues and the possibility of introducing instability or breakage, sometimes you will need to do a dist-upgrade. Ubuntu deliberately does this as a way of making it clear that you need to take care: apt-get upgrade can be trusted not to break things but may not install all security fixes, while apt-get dist-upgrade will work harder to ensure all security updates are applied but may break things, so the admin must take more care.

The unfortunate thing is that there are no real short-cuts here. The reality is you're in an imperfect situation: you're a developer working for a small business that cannot afford to employ both a developer and a dedicated administrator. You need to apply your limited resources in a way which minimises risks to the business, and ensure the business knows what these risks are and understands the likelihoods and consequences - they need to be in an informed position and know they have made their decisions on an informed basis.

Hope for the best, but plan for the worst. Have reliable and tested backups. Know what an update will do before you apply it. Monitor security lists to stay aware of new and emerging threats, be in a position to assess your exposure, and review logs to confirm your assumptions. Check for updates daily and review what they would do (a simulation sketch follows). It is quite legitimate to decide not to apply updates daily, but only after you have assessed what they are and what they patch, and are able to make that decision on an informed basis.
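On reviewing before applying: apt can simulate the whole operation without touching the system, using its standard -s (simulate) flag:

```
apt-get update

# Dry runs: print exactly what would be upgraded, installed or removed,
# changing nothing on disk:
apt-get -s upgrade
apt-get -s dist-upgrade

# Read the changelog for a pending update before deciding (package name illustrative):
apt-get changelog openssl
```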

Related to point above, no social engineering and the web app itself is secure.

Believing you know and knowing can be worlds apart. I've lost count of the number of times a security assessment has revealed vulnerabilities I was not aware of. It is highly likely there are vulnerabilities in PHP (or any other layer involved) which have not yet been discovered - or worse, have been discovered by the wrong people and have not yet become widely known. When security professionals assess an application, they will never state that it is secure. They will say that no vulnerabilities were detected, but that does not mean none exist.

Tim X
0

This is definitely a good practice and should be part of your server security routine. Whether your security plan needs more than this one practice is hard to say, but I've never seen, nor can I think of, a good security plan that doesn't have this as a foundation.

Not patching installed software and services is a high security risk, because the bugs and vulnerabilities discovered in those packages can often be exploited to gain full access to the box. That is one of the first things, if not the first thing, an attacker will attempt.

But system misconfigurations are also one of the key ways attackers compromise systems. Patching the software alone won't necessarily correct misconfigurations. Sometimes it will, but only when the misconfiguration was introduced by the original installation of the app.

One thing to look at is whether any of your services run as root. There are not many cases where that is a good idea, but it happens: a service ends up running as root because the person who installed it either didn't know any better or couldn't get it to work as a non-root user. That is just one example (a quick check is sketched below).
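A quick way to check on a typical Linux box, using only standard tools:

```
# Network listeners, and the user each one runs as:
sudo ss -tlnp

# The distinct users that own running processes:
ps -eo user= | sort -u

# Root-owned listeners deserve a second look. A web server master process
# legitimately starts as root to bind ports 80/443, but its workers should
# run as an unprivileged user such as www-data.
```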

Whatever operating system is being used, there will be best practices out there for how to harden that OS. Start there, then specifically harden all installed services/packages, such as MySQL, Apache, etc.

If during deployment the OS and all services were configured according to best practices, then afterwards applying patches might be the only thing you need to do.

I say might because it still is not a certainty. It depends on your change processes and whether they cover maintaining proper security. Are proposed changes scrutinized for security? Do checkouts include security-specific tests? The answer to those questions is usually no, and therefore mistakes by an admin can happen and go uncaught.

Thomas Carlisle
0

Basically, you cannot keep a web server secure. There are 0-day exploits which might get fixed within a few hours of discovery but have already been exploited by then. A good hosting company might be able to react to such threats in a very timely fashion. However, a managed server is expensive (assuming the hoster's management staff is worth anything). The hoster might also decide to take the server offline if they conclude that this is required to eliminate a present threat, giving priority to safety over availability - which might be the wrong kind of safety if you need the box up and responsive at all times. How much money, and how many lives, do you lose per hour of outage?

Data leakage might also occur elsewhere. Are the backups securely stored, secure against both remote and physical access? (In one case, I was able to demonstrate data exfiltration not from the live web site, but from the backups.) How easily can someone break into the server room? There is no need for some super-criminal like Lex Luthor to attack you directly (after all, he's just after the 40 cakes), but you can be collateral damage. And, yes, I have seen secret server farms (case 1: I accidentally opened the wrong door, which apparently some idiot had forgotten to lock and some other idiot had decided should have a handle on the outside; case 2: I wanted to find out why so much heat was coming from behind the chipboard in a shared storage facility where one could rent space by the square metre). In theory, operated by professional companies; in practice... well. Someone might seize the opportunity, lift a few servers or drives, sell them on eBay, and good luck after that.

Remote attacks against the infrastructure are also often overlooked. Strictly speaking, this has nothing to do with the security level of the server itself, but if someone attacks the management infrastructure, or the proxy which serves the software packages for your apt-get updates, security still gets somewhat... degraded. Yes, some people do such things for fun.

I'd assume that AWS EC2 is reasonably safe with regard to such scenarios, but AWS EC2 is only given as an example in the OP's question, not as a fact (unlike the statement that the web application itself is secure - which is probably there to keep us from sharing all our war stories about SQL injection and the like).

Klaws