
We're a small software company with only one product: a web site (8 million visits per month) that is load balanced across around 20 web servers.

At the moment we do weekly releases, aiming towards continuous deployment.

Our servers are running CentOS, our clients Mac OS X.

We're currently evaluating different packaging systems:

  • RPM
  • subversion + some shell scripting (creating a "production-svn-tree" separate from the source code tree)
  • our self-made "packager": a tar archive plus some scripting - the current problems are that there's no logic for downgrading (installing a non-current version) and no way to delete files; IMO adding those features would be quite a bit of work
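For what it's worth, the downgrade and file-deletion gaps in option 3 are usually closed not by teaching the tar script to undo itself, but with a "keep N releases, flip a symlink" layout: each version is unpacked into its own directory, so deleted files are simply absent from the new tree, and a downgrade is just repointing the symlink. A minimal sketch under invented paths (a temp dir stands in for something like /srv/app):

```shell
set -e

APP_ROOT=$(mktemp -d)          # stand-in for /srv/app
mkdir -p "$APP_ROOT/releases"

deploy() {  # deploy <version> <source-dir>
    ver=$1; src=$2
    dest="$APP_ROOT/releases/$ver"
    mkdir -p "$dest"
    cp -R "$src"/. "$dest"/                 # in real life: tar -xzf app-$ver.tar.gz -C "$dest"
    ln -sfn "$dest" "$APP_ROOT/current"     # switch the live version
}

activate() {  # upgrade or downgrade = repoint the symlink
    ln -sfn "$APP_ROOT/releases/$1" "$APP_ROOT/current"
}

# fake two versions of the app payload
v1=$(mktemp -d); echo "one" > "$v1/index.html"
v2=$(mktemp -d); echo "two" > "$v2/index.html"   # files dropped in v2 simply don't exist here

deploy 1.0 "$v1"
deploy 1.1 "$v2"
cat "$APP_ROOT/current/index.html"   # -> two
activate 1.0                         # rollback to the older release
cat "$APP_ROOT/current/index.html"   # -> one
```

Old releases can be pruned with a one-liner once you keep more than a handful.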

I wonder if some of you have experience with using packaging systems for deployments and could give some insights.

hansaplast

5 Answers

2

I have used RPM packaging for deployment, and loathe it compared to Debian packaging. Using a package gets you a lot of benefits beyond just the source code and permissions: dependencies, apache config, logrotate, cronjobs, postinst scripts, etc. Being able to use debconf to ask the user questions (e.g. what URL should I serve this web app on?) and then template the answers into the apache config is really useful. However, as far as I can tell, there is no debconf-like equivalent for RPM, which means you end up editing config files manually and can't easily install new versions from the package.

I generally think that just installing from source control on servers misses the point, because for a complicated application it's only part of the story. So given your three options above, I would go for 3.

  • Thanks for your answer - sounds experienced. Some questions: apart from debconf (which IMO we currently have no need for), are there other reasons why you prefer Debian packaging over RPM? Also, I forgot to name the limitations of our current tool - namely, no downgrading (rollback) and no deleting of files. Plus we don't want to maintain a home-made solution - there MUST be a ready-to-use solution out there, no? – hansaplast Sep 24 '10 at 08:03
  • You might find that if you had debconf available, you actually did need it :) We have a reasonably straightforward webapp that we manage with Debian packaging, which has something like 15 questions, although if you're only ever deploying to the same place I guess you might not need it. However, even something like having a different staging URL from the production URL can be asked as a debconf question and templated into the apache conf. Other than that, I personally find Debian packaging easier than RPM, but that's mostly personal opinion. –  Sep 24 '10 at 09:07
  • About downgrading - we put a call to dbconfig-common (DB abstraction for packages) into the Debian postinst script to do a pg_dump before upgrading anything. That, combined with a previous version of the package (assuming you archive properly or tag the version you build each package from), should get you a reasonable rollback. –  Sep 24 '10 at 09:09
  • 1
    It's "nice" that you prefer Debian to RHEL, and there are many people who would say the opposite (of which I'm one). However I wouldn't recommend using a custom packaging solution instead of one compatible with the OS you are using, there are many better alternatives than that. – James Antill Sep 24 '10 at 19:17
  • If they were starting from scratch, sure. However they do already have a custom solution that seems to be at least somewhat working, since keeping it was one of the three options. If keeping it is still preferable to RPM, then it makes sense to keep it, not implement something worse. Whether you or I prefer Debian or RHEL is beside the point. –  Sep 28 '10 at 14:24
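The pg_dump-before-upgrade idea in the comments above generalizes to any mutable state: snapshot before installing the new version, restore the snapshot on rollback. A file-level sketch of the same pattern (invented paths; for a database you'd swap the tar commands for pg_dump/pg_restore):

```shell
set -e
APP=$(mktemp -d)        # stand-in for the deployed app dir
BACKUPS=$(mktemp -d)

echo "v1 config" > "$APP/app.conf"

# before upgrading: snapshot current state (the pg_dump step)
tar -czf "$BACKUPS/pre-upgrade.tar.gz" -C "$APP" .

# the upgrade goes wrong and mangles state
echo "broken" > "$APP/app.conf"

# rollback: restore the snapshot, then reinstall the previous package
rm -rf "$APP"; mkdir "$APP"
tar -xzf "$BACKUPS/pre-upgrade.tar.gz" -C "$APP"
cat "$APP/app.conf"    # -> v1 config
```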
2

I would suggest Capistrano which, while not a packaging system, is specifically designed to deploy code to one or many servers. It deploys directly from your version control system (svn, git, mercurial, etc.) to the servers, performs any scripting you need, runs database migrations, etc.

It keeps a number of previous versions on the servers, allowing you to roll back in seconds in case of unexpected trouble.

Furthermore it provides for multi-server deployments, deploying to multiple servers at once and rolling them all back to the previous version if any part fails to properly deploy.

Capistrano originates in the Ruby world, but is widely used today. It may look simple but is a very powerful tool and comes highly recommended. My company uses it to deploy dozens of websites to multiple servers.

Because Capistrano is a command-line tool we use Webistrano, a Web GUI to manage and run Capistrano in a user-friendly way.
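The all-or-nothing multi-server behaviour described above can be sketched in plain shell. This simulates three web servers as local directories (a real setup would ssh/rsync to hosts, which Capistrano handles for you); every hostname and path here is invented, and "web3" is rigged to fail the first deploy so the rollback path is exercised:

```shell
set -e
WORK=$(mktemp -d)
HOSTS="web1 web2 web3"
for h in $HOSTS; do mkdir -p "$WORK/$h"; echo "v1" > "$WORK/$h/app"; done

deploy_host() {  # deploy_host <host> <version>; non-zero on failure
    d="$WORK/$1"
    cp "$d/app" "$d/app.prev"                       # remember previous version
    if [ "$1" = "web3" ] && [ "$2" = "v2-bad" ]; then return 1; fi  # simulated failure
    echo "$2" > "$d/app"
}

rollback_host() { d="$WORK/$1"; [ -f "$d/app.prev" ] && mv "$d/app.prev" "$d/app"; }

deploy_all() {  # if any host fails, roll every touched host back
    ver=$1; done_hosts=""
    for h in $HOSTS; do
        if deploy_host "$h" "$ver"; then
            done_hosts="$done_hosts $h"
        else
            echo "deploy to $h failed, rolling back" >&2
            for r in $done_hosts $h; do rollback_host "$r"; done
            return 1
        fi
    done
}

deploy_all v2-bad || true
cat "$WORK/web1/app"   # -> v1  (rolled back because web3 failed)
deploy_all v2
cat "$WORK/web3/app"   # -> v2  (clean deploy succeeds everywhere)
```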

Martijn Heemels
  • We're using Puppet and want to integrate our packaging solution into it - do you see a way to combine Puppet and Capistrano? (Since Puppet is "pull" and Capistrano is "push", to me they seem incompatible.) – hansaplast Sep 25 '10 at 19:04
  • I'd suggest using the right tool for the job. They supplement each other nicely. Use Puppet to bring the servers into the state you expect and keep them there, ready to accept apps. But Puppet is currently not very suited to performing things 'right now' and coordinated across hosts. Use Capistrano to push the app to the servers. It will make sure they all complete successfully. I've noticed most devops fix the deployment problem in this two-tiered way. – Martijn Heemels Sep 26 '10 at 15:51
  • Here's an interesting blog that has written about your issue: http://dev2ops.org/blog/2009/11/2/6-months-in-fully-automated-provisioning-revisited.html – Martijn Heemels Sep 27 '10 at 16:10
1

Given the simplicity of your situation, you could probably get away with using rsync or NFS mounts to distribute the code and then some tiny piece of code to "update" from running one version to another (this is what I assume you mean by option #2).

However, if you want something better than that, I'd strongly recommend using the native packaging system to deliver the code (you then get "free" integration with all the native packaging tools). Of course creating good packages takes some skill, but that investment should pay for itself. On the other side of that coin, using a non-native packaging format is something you'll have to pay for time and time again.
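Purely as an illustrative sketch (hypothetical names, untested): a bare-bones RPM .spec for a tarball-style web app is not much more than the existing tar step plus metadata. Once built, `rpm -Uvh` upgrades and removes files that disappeared between versions, and `rpm -Uvh --oldpackage older.rpm` installs an older version - the two features missing from the home-made packager.

```
Name:      mywebapp
Version:   1.2.0
Release:   1
Summary:   Example web app payload
License:   Proprietary
Source0:   mywebapp-%{version}.tar.gz
BuildArch: noarch

%description
Static/app files for the web tier.

%prep
%setup -q

%install
mkdir -p %{buildroot}/srv/mywebapp
cp -R . %{buildroot}/srv/mywebapp

%files
/srv/mywebapp

%post
# e.g. reload apache here
```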

As another poster said, you may then want to use a configuration management system on top of that (but again, any good configuration management system should integrate with native packaging ... so any investment there will still pay off).

As to some of the responses implying "dpkg rules, rpm sucks" I would suggest that if you are inclined to listen to them at all, then just move your servers to Debian and use native packaging.

James Antill
  • Yay, thanks for the answer - I also think putting some effort into learning to create good RPM packages will pay off. Do you know a good "RPM primer" - or how did you learn about RPM? – hansaplast Sep 25 '10 at 12:37
  • 2
    Probably the best primer (after the very basics) is the Fedora packaging guidelines: http://fedoraproject.org/wiki/Packaging/Guidelines – James Antill Sep 25 '10 at 14:58
0

I would seriously advise you to choose a configuration management system. There are a few open source projects to choose from: cfengine, puppet, bcfg2, ...

Cfengine is the most widely used (17 years of experience, with deployments of 60,000+ servers, e.g. Facebook). There is paid support if you require it: cfengine.com

Puppet is quite popular among newcomers to the field because it appears simpler (a matter of taste, IMO), but it has a huge problem: it depends on Ruby, so you have to install that moving target. See this blog post by the Debian Ruby maintainer for an opinion on whether you want this mess managing your infrastructure: http://www.lucas-nussbaum.net/blog/?p=566. They offer paid support as well.

I know of no one running bcfg2; it looks nice, though.

Oh yes, and there is another Ruby tool, Chef. Also a mess to install.

The final goal is to just kickstart a new server (you use CentOS) and let it autoconfigure itself. The config management will take care of everything. You need to change something on all servers? No problem: write a policy and it will be deployed during the next run of the software. It takes a while to set up (not extremely long, but as with all new things you need to get a feel for it), and then you will wonder how on earth you were able to live without it.
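To make the "write a policy" step concrete: once the app ships as a native package (the question's option 1), rolling the whole fleet forward - or back - is a one-line version bump in a manifest. A hypothetical Puppet snippet (all names invented):

```puppet
# site.pp (sketch)
package { 'mywebapp':
  ensure => '1.2.0-1',              # bump this to roll the fleet forward (or back)
}

service { 'httpd':
  ensure    => running,
  subscribe => Package['mywebapp'], # restart/reload when the package changes
}
```

Every node picks the change up on its next agent run, so this is the "pull" counterpart to a Capistrano-style push.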

natxo asenjo
  • we're using puppet - but I don't see how puppet could deploy our frontend code without using any sort of packaging/source revision infrastructure. Our idea was to use puppet to trigger the package updates – hansaplast Sep 24 '10 at 07:55
  • ok, I misinterpreted your question :(. If the application you have to deploy on all servers is exactly the same, why not mount an NFS share from a central filer on all the servers? You can then use version control to commit, roll back, etc., in just one place. The filer then becomes your single point of failure, so you should have a clustered NFS server with GFS. – natxo asenjo Sep 24 '10 at 11:02
  • Chef is not a fork of Puppet and the two pursue quite different approaches to configuration management. – joschi Sep 25 '10 at 06:06
  • ok, I'll correct the fork thing. What I mean is: it is ruby based, so all the disadvantages that apply to puppet also apply to chef. – natxo asenjo Sep 25 '10 at 06:31
  • Which is also not true, by the way. Just because they both use Ruby as their implementation language doesn't mean that they have the same advantages or disadvantages. – joschi Sep 25 '10 at 06:41
  • From the sysadmin's point of view, yes. They're both ruby. Obviously you have not read the link about the ruby debian FAQ ;-) – natxo asenjo Sep 25 '10 at 08:14
-1

Following Occam's razor and Einstein's simplicity principle, I'd go with number 3. As far as I can tell from my limited experience with this system (I mostly use Deb, Arch and BSD), RPM is pretty much bloatware, probably far more complicated than your needs require. I can hardly see the benefit of introducing an additional SVN layer. It seems most logical to use some scripting to build tarballs from SVN tags and some more to deliver and deploy. IMHO.

Ivan
  • I forgot to add the current limitations of our home-made option 3 - there's no such thing as downgrade (rollback, in our situation) and no deleting of files - that's what led us to consider switching to RPM. As for RPM being bloatware: I guess once we figure out how it works we could script it so it would be fairly simple, no? – hansaplast Sep 24 '10 at 07:56