8

How can a paranoid sysadmin confidently stay up-to-date with the latest stable PHP versions? (security fixes have been coming in pretty regularly).

This is a Production server, and so "breaking stuff" is scaring my guy to death. Downtime for maintenance isn't the issue.

Specifically, we're running a recent Suse Enterprise Linux, but a generic or more general answer is perfectly acceptable.

How do you handle security updates to production machines? What are we so ignorant of that this guy is so scared to just use the package manager to "update"?

Any advice?

masegaloeh
anonymous coward
    being paranoid about ANY patch php, gtk+ wrapper, windows driver update, etc is a good thing IMHO - this is more than just PHP it's a general patch philosiphy see http://serverfault.com/questions/104665/updating-production-ubuntu-boxes-the-dos-and-donts for a good discussion of patching in general. – Zypher Mar 05 '10 at 15:36
  • @Zypher Thanks for the link. Situation here isn't ideal, but we're working towards that. Clear evidence that things need to hurry up and get there. =) – anonymous coward Mar 05 '10 at 15:58

4 Answers

6

I handle PHP the same way I handle everything else: upgrade the development environment (a VMware clone of production) first, regression test the hell out of it, then promote it to production using the same deployment templates we used for the VMware hosts. (If you're using a package manager to do your upgrades, you would use the same packages.)

As an extra layer of insulation, our production environment consists of paired redundant hosts: one host is taken out of the production rotation for its upgrade, then tested thoroughly before we switch over to it and upgrade its partner.

As a general rule security updates are applied as soon as practical, and non-security/non-critical bugfix updates are applied quarterly to minimize downtime.
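
If you are doing the upgrades through the package manager on SLES, the flow is roughly the following. This is only a sketch: I'm assuming a zypper-based release, and your patch and package names will differ.

```sh
# On the dev clone (the VMware copy of production) -- sketch only:
zypper refresh          # pull current patch metadata
zypper list-patches     # see which patches (including security) are pending
zypper patch            # apply them to the clone
# ... run your regression tests against the clone ...

# Once the clone passes, repeat the exact same steps on production,
# one host at a time if you have the paired-host setup described above:
zypper refresh && zypper patch
```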

voretaq7
  • Having the redundancy is great. We're moving a lot of things to virtual, which will make this a lot easier. Just making sure my assumptions weren't insane. =) Thanks for your answer. – anonymous coward Mar 05 '10 at 15:54
  • VMs are a wonderful invention -- In theory the redundant production systems could all be VMs which is a **huge** cost savings if you're paying for your own power :) Other advantages include the ability to snapshot your VMs before you do the upgrades for near-instant rollback. – voretaq7 Mar 05 '10 at 17:06
4

PHP is at the top of my list of things to keep updated to the current version. I trust it less than most things.

Ultimately, your best bet is to review every changelog from your current version to latest and tangibly weigh the risk.

If you are talking upgrading minor versions, such as 5.3.1 to 5.3.2, I wouldn't worry too much.

If you're upgrading from 5.2.x to 5.3.x, you're likely to introduce some compatibility issues.

If you're using system packages, distributions typically will not introduce upgrades that break existing functionality. RHEL and CentOS backport fixes to old versions until a major distribution release comes out. They typically do the testing for you, which reduces risk. I would expect SUSE Enterprise to be similar.

For upgrade paths, the best bet would be to build a test server and test the application against the latest version before upgrading production.
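
To actually review those changelogs, a couple of commands along these lines help. This is a sketch: I'm guessing the SLES package is called php5, so adjust the name for your system.

```sh
# Which PHP package and version is installed? (php5 is an assumption;
# it may be php or php53 on your release.)
rpm -q php5

# The vendor changelog, including any backported security fixes:
rpm -q --changelog php5 | less

# Compare that against the upstream changelog for the version you would
# move to: http://www.php.net/ChangeLog-5.php
```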

Warner
  • I did think that the package-management systems used vetted, *generally* break-free packages, as long as your box wasn't terribly ravaged in between. Nice to have that affirmed. Thanks. – anonymous coward Mar 05 '10 at 15:55
  • Package management can be very hit or miss -- it usually works fine, but I've had patch-level bumps in a package blow up in my face even when the upstream vendor says no compatibility-breaking changes were introduced. Definitely a test-before-you-deploy situation in my experience. – voretaq7 Mar 05 '10 at 17:24
  • Were you using only system packages or did you have software you compiled yourself as well? – Warner Mar 05 '10 at 17:32
1

Another, less-appreciated answer is to build a whitelist of allowed URLs and features. In Apache you can do this by combining the proxy and rewrite features.

Basically, you make two installs: a front end with a stripped-down configuration (proxy and rewrite enabled, no code execution, etc.), and the real PHP install behind it. Any "allowed" URL (with its parameters, etc.) gets proxied to the second install.

Then subscribe to PHP's developer mailing list and monitor the release notes carefully. Any time you see something that looks like it could be a security vulnerability, build a shim in the first install that detects that kind of request and sends the user an error.

In a setup like this you'll want to redirect POST to a filter (if you need POST at all; some sites get by just fine by allowing POST only from certain IP addresses!) that can check for allowed sources and pre-validate everything.

Such a whitelist is very time-consuming to set up, but for mission-critical apps that need to run for longer than PHP's stable lifespan (which seems to be only a few years), this can be an excellent way to leverage the large number of PHP applications without inheriting their vulnerabilities as well.
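
A rough sketch of what the front-end install's configuration might look like. The URL patterns, the 10.0.0.x trusted range, and the back-end address 127.0.0.1:8080 are all made-up examples; treat this only as a starting point.

```apache
# Front-end Apache: only mod_rewrite and mod_proxy loaded, no PHP at all.
RewriteEngine On

# Refuse POST unless it comes from a trusted address range (placeholder range).
RewriteCond %{REQUEST_METHOD} =POST
RewriteCond %{REMOTE_ADDR}    !^10\.0\.0\.
RewriteRule .* - [F,L]

# Whitelisted URLs are proxied through to the real PHP install.
RewriteRule ^/index\.php$        http://127.0.0.1:8080/index.php  [P,L]
RewriteRule ^/article/([0-9]+)$  http://127.0.0.1:8080/article/$1 [P,L]

# Anything not explicitly allowed never reaches PHP.
RewriteRule .* - [F,L]
```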

geocar
1

In addition to the above, you can enable package rollbacks, just in case.

Then if something does break on production that you were absolutely sure was working fine on development, you can at least undo the change quickly before troubleshooting the problem.

See Rollback YUM package for an example with yum; I am sure other package managers have similar features.
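
For example, on a yum-based box (I believe newer zypper versions can also install older package versions, but check your own tools):

```sh
# Newer yum releases keep a transaction history you can undo; older ones
# need the rpm "repackage" mechanism from the linked question instead.
yum history list php        # find the transaction that upgraded PHP
yum history undo <ID>       # roll that transaction back

# Or simply step back to the previously packaged version:
yum downgrade php php-cli php-common
```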

I know it is belt and braces, and I agree with Warner about point releases: minor changes should not break anything. Personally I have not had any problems with PHP upgrades, but it is always better to be safe than sorry.

Richard Holloway