No. This is not enough to keep you secure.
It'll probably keep you secure for a while, but security is complex and fast-moving, so your approach really isn't good enough for long-term security. If everybody made the same assumptions you're making in your question, the internet would be one big botnet by now.
So no, let's not limit this question to packages. Let's look at server security holistically so anybody reading this gets an idea of how many moving pieces there really are.
APT (eg Ubuntu's repos) only covers a portion of your software stack. If you're using (eg) WordPress or another popular PHP library, and that isn't repo-controlled, you need to update that too. The bigger frameworks have mechanisms to automate this, but make sure you're taking backups and monitoring service status, because these updates don't always go well.
You wrote it all yourself, so you think you're safe from the script kiddies? There are automated SQL injection and XSS exploit bots running around, poking every query string and form alike.
This is actually one of the places where a good framework helps protect against inadequate programmers who don't appreciate the nuances of these attacks. Having a competent programmer audit the code also helps allay fears here.
Does PHP (or Python, or whatever you're running) really need to be able to write everywhere? Harden your configuration and you'll mitigate many attacks. Ideally the only places a webapp can write are the database and places where scripts will never be executed (eg behind an nginx rule that only serves static files).
The PHP defaults (at least as most people use them) allow PHP to read and write PHP anywhere in the webroot. That has serious implications if your website is exploited.
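A hedged sketch of that lockdown (the `deploy` user, the `www-data` group and the /var/www/example path are assumptions for illustration; adjust for your stack):

```bash
# Webroot is owned by a deploy user; the webserver/PHP group can read, not write.
sudo chown -R deploy:www-data /var/www/example
sudo find /var/www/example -type d -exec chmod 750 {} \;
sudo find /var/www/example -type f -exec chmod 640 {} \;

# The one writable place is uploads, which nginx should serve as static
# files only (never pass anything under it to PHP).
sudo chown -R www-data:www-data /var/www/example/uploads
sudo chmod 770 /var/www/example/uploads
```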
Note: if you do block off write access, things like WordPress won't be able to automagically update themselves. Look to tools like `wp-cli` and get them to run on a scheduled basis.
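A hedged sketch of what that schedule could look like (the `wp` binary location, the site path and the `deploy` user are assumptions for illustration):

```bash
# Run from the crontab of the user that owns the site files (here: "deploy").
# Nightly: update WordPress core, then all plugins, logging to $HOME.
0 4 * * * /usr/local/bin/wp core update --path=/var/www/example >> $HOME/wp-update.log 2>&1
5 4 * * * /usr/local/bin/wp plugin update --all --path=/var/www/example >> $HOME/wp-update.log 2>&1
```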
And your update schedule is actively harmful. What on earth is "every so often"? Critical remote security bugs have a short half-life, but there's already a delay between 0-day and patch availability, and some exploits are also reverse-engineered from patches (to catch the slowpokes).
If you're only applying updates once a month, there's a very strong possibility you'll be running exploitable software in the wild. TL;DR: Use automatic updates.
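On Ubuntu/Debian, the stock way to do that for repo-managed packages is `unattended-upgrades`:

```bash
# Install and enable automatic security updates (Debian/Ubuntu).
sudo apt install unattended-upgrades
sudo dpkg-reconfigure --priority=low unattended-upgrades

# Confirm it's switched on:
cat /etc/apt/apt.conf.d/20auto-upgrades
```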
Versions of distributions don't last forever. If you were sensible and picked an LTS version of Ubuntu, you've got five years from its initial release. Two more LTS versions will come out within that time, which gives you options.
If you were on a "NEWER IS BETTER" rampage and went with 16.10 when you set your server up, you've got nine months of support. Yeah. Then you have to upgrade through 17.04 and 17.10 before you can relax on 18.04 LTS.
If your version of Ubuntu lapses, you can `dist-upgrade` all day long, but you're not getting any security updates.
And the LAMP stack itself isn't the only attack vector to a standard web server.
- You need to harden your SSH configuration: only use SSH keys, disable passwords, shunt the port around, disable root logins, and monitor brute-force attempts and block them with `fail2ban` (see the sketch after this list).
- Firewall off any other services with `ufw` (et al.).
- Never expose the database (unless you need to, and then lock down the incoming IP in the firewall).
- Don't leave random PHP scripts installed or you will forget them and they will get hacked.
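A hedged sketch of the first three bullets (the port number and the allowed database IP are placeholders; check your distro's service names):

```bash
# --- SSH: edit /etc/ssh/sshd_config to include lines like these ---
#   Port 2222
#   PasswordAuthentication no
#   PermitRootLogin no
sudo systemctl restart ssh      # the unit is "sshd" on some distros

# --- Block brute-force attempts ---
sudo apt install fail2ban       # the default jail covers sshd

# --- Default-deny firewall: open only what you actually serve ---
sudo ufw default deny incoming
sudo ufw allow 2222/tcp         # your SSH port
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
# If the database really must be exposed, pin it to one source IP:
sudo ufw allow from 203.0.113.10 to any port 3306 proto tcp
sudo ufw enable
```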
There's no monitoring in your description. You're blind. If something does get on there and starts pumping out spam, infecting your webpages, etc, how can you tell something bad happened? Process monitoring. Scheduled file comparison against git (and make sure the server only has read-only access to the repo).
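A minimal sketch of that comparison, assuming the webroot is a git checkout at /var/www/example (the address is a placeholder, and `mail` needs an MTA installed):

```bash
#!/bin/sh
# Cron this hourly: report any file that differs from what git expects.
cd /var/www/example || exit 1
CHANGES=$(git status --porcelain)
if [ -n "$CHANGES" ]; then
    echo "$CHANGES" | mail -s "Unexpected file changes on $(hostname)" you@example.com
fi
```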
Consider the security (physical and remote) of your host. Are the dime-a-dozen "hosts" (aka cPanel pirates, sqwanching out $2/month unlimited hosting plans) investing the same resources in security as a dedicated server facility? Ask around and investigate their history of breaches.
Note: a publicised breach isn't necessarily a bad thing. Tiny hosts tend not to have any record at all, and when they are broken into, there aren't the public post-mortems that many reputable hosts and services perform.
And then there's you. The security of the computer you code all this stuff on is almost as important as the server's. If you reuse passwords, you're a liability. Secure your SSH keys with a physical FIDO U2F key.
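If your OpenSSH is 8.2 or newer, generating a hardware-backed key is one command; the private key is useless without the token plugged in and touched:

```bash
# Generates id_ed25519_sk; each use requires a physical touch on the token.
ssh-keygen -t ed25519-sk -C "workstation+hardware-key"
```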
I've been doing devops for ~15 years and it is something you can learn on the job, but it really only takes one breach (one teenager, one bot) to ruin an entire server and cause weeks of disinfection work.
Just being conscious of what's running and what's exposed helps you make better decisions about what you're doing. I just hope this helps somebody start the process of auditing their server.
But if you, the average web-app programmer, are unwilling to dig into this sort of stuff, should you even be running a server? That's a serious question. I'm not going to tell you you absolutely shouldn't, but what happens when you ignore all this, your server gets hacked, your client loses money, you expose personal customer information (eg billing data), and you're sued? Are you insured for that level of loss and liability?
But yeah, this is why managed services cost so much more than dumb servers.
On the virtue of backups...
A full system backup is possibly the worst thing you could keep around (for security, at least) because you'll be tempted to use it if you get hacked. Its only place is recovering from a hardware failure.
The problem with using them after a hack is that you reset to an even earlier point in time. Even more flaws in your stack are apparent now; even more exploits exist for the hole that got you. If you put that server back online, you could be hacked again instantly. You could firewall off incoming traffic and do a package upgrade, and that might help, but at this point you still don't know what got you or when it got you. You're basing all your assumptions off a symptom you saw (ad injection on your pages, spam bouncing in your mailq). The actual hack could have happened months before that.
They're obviously better than nothing, and fine in the case of a disk dying, but again, they're rubbish for security.
Good backups are recipes
You want something (a plain-language document, or something technical like an Ansible/Puppet/Chef routine) that can guide somebody through restoring the entire site to a brand new server. Things to consider (a minimal sketch follows the list):
- A list of packages to install
- A list of configuration changes to make
- How to restore the website source from version control.
- How to restore the database dump*, and any other static files you might not version-control.
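Here's what the scripted form can look like; every package name, path, repo URL and dump location below is a placeholder for your own recipe:

```bash
#!/bin/sh
set -e

apt-get update
apt-get install -y nginx php-fpm mariadb-server        # 1. packages
cp -r ./config/nginx/* /etc/nginx/                     # 2. configuration changes
git clone git@example.com:site.git /var/www/example    # 3. website source
mysqladmin create example                              # 4. database...
mysql example < ./backups/example.sql                  #    ...and its dump
systemctl restart nginx                                # pick up the new config
```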
The more verbose you can be here, the better because this also serves as a personal backup. My clients know that if I die, they have a tested plan to restore their sites onto hardware they control directly.
A good scripted restore should take no more than 5 minutes. So even the time-delta between a scripted restore and restoring a disk image is minimal.
* Note: database dumps must be checked too. Make sure there aren't any new admin users in your system, or random script blocks. This is as important as checking the source files, or you'll just be hacked again.
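A quick hedged check, assuming a WordPress database called "example" (table and column names differ per app):

```bash
# List every user before trusting the dump; eyeball anything you didn't create.
mysql example -e "SELECT user_login, user_registered FROM wp_users;"

# And count injected script blocks in the dump itself:
grep -c "<script" ./backups/example.sql
```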