
We have an ecommerce app that we develop at our company. It's a reasonably standard LAMP application that we have been developing on and off for about three years. We develop the application on a testing domain, where we add new features, fix bugs, and so on. Our bug tracking and feature development are all managed within a hosted Subversion solution (unfuddle.com). As bugs are reported we make the fixes on the testing domain and then commit the changes to svn when we are happy the bug has been fixed. We follow the same procedure for the addition of new features.

It is worth pointing out the general architecture of our system and application across our servers. Each time a new feature is developed we roll the update out to all sites using our application (always on servers we control). Each site using our system uses exactly the same files for 95% of the codebase. Each site has a couple of folders containing files bespoke to that site (CSS files, images, etc.). Other than that, the differences between sites are defined by various configuration settings in each site's database.

That brings us to the actual deployment itself. As and when we are ready to roll out an update, we run a command on the server that hosts the testing site. This performs a copy (cp -fru /testsite/ /othersite/) and goes through each vhost, force-updating files based on modification date. Each additional server that we host on has a vhost that we rsync the production codebase to, and we then repeat the copy procedure for all sites on that server. During this process we move the files we don't want overwritten out of the way and move them back when the copy has completed. Our rollout script performs a number of other functions, such as applying SQL commands to alter each database, adding fields, new tables, etc.
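In rough outline, the rollout looks something like this (a simplified sketch; the paths, site list and bespoke folder names here are placeholders rather than our real ones):

    #!/bin/bash
    # Simplified sketch of our current rollout (placeholder paths/names).
    SITES="/var/www/site-a /var/www/site-b"    # vhosts on this server

    for site in $SITES; do
        tmp=$(mktemp -d)
        # move the bespoke folders aside so the copy doesn't clobber them
        # (assumes the testing site has no folders with these names)
        mv "$site/css" "$site/images" "$tmp/"
        # force-update files based on modification date
        cp -fru /var/www/testsite/. "$site/"
        # move the bespoke folders back
        mv "$tmp/css" "$tmp/images" "$site/"
        rmdir "$tmp"
    done

    # each additional server gets the codebase via rsync, and the same
    # copy loop is then repeated on that server
    rsync -az /var/www/testsite/ deploy@web2.example.com:/var/www/testsite/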

We have become increasingly concerned that our process is not stable enough or fault tolerant, and is also a bit of a brute-force method. We're also aware that we are not making the best use of Subversion: because we don't use branches or tags, working on a new feature prevents us from rolling out an important bug fix. It also seems wrong that we have so much replication of files across our servers, and we're not able to easily roll back what we have just rolled out. We do perform a diff before each rollout to get a list of the files that will be changed, so we know what has changed afterwards, but rolling back would still be problematic. On the database side I've started looking into dbdeploy as a potential solution. What we really want, though, is some general guidance on how we can improve our file management and deployment. Ideally we want file management to be more closely linked to our repository, so that a rollout or rollback would be more connected to svn, perhaps something like using the export command to make sure the site files match the repository files. It would also be good if the solution reduced the file replication across our servers.
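For example, I'm imagining something along these lines, where each rollout exports a known tag and a rollback just exports the previous one (the repository URL and tag names are hypothetical):

    # deploy: make the site files match a tagged revision in the repo
    svn export --force http://svn.example.com/ourapp/tags/1.4.2 /var/www/site-a/

    # rollback: export the previous tag over the top
    svn export --force http://svn.example.com/ourapp/tags/1.4.1 /var/www/site-a/

(--force lets svn export write into a directory that already exists.)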

Ignoring our current methods, it would be really good to hear how other people approach the same problem.

To summarise:

  • What is the best way to keep files across multiple servers in sync with svn?
  • How should we prevent file replication? Symlinks / something else?
  • How should we structure our repo so we can develop new features and fix old ones?
  • How should we trigger rollouts/rollbacks?

Thanks in advance

EDIT:

I have read a lot of good things recently about using Phing and Capistrano for these kinds of tasks. Can anyone give more info about them and how well they would suit this kind of task?

robjmills

2 Answers


My advice for doing releases is to have feature releases and maintenance releases. Feature releases are the releases that get new features; these get added to your Subversion trunk. When you think the trunk is feature complete, you branch it into a release branch. Once your QA process is happy with the release, you tag it and deploy the code to your servers.

Now, when you get a bug report, you commit the fix to the release branch and port it to the trunk. When you're happy with the number of bugs fixed, you can tag and deploy a maintenance release.

It's important to have a branch of your live code base (or the ability to create one by knowing the live revision) that is separate from your development branch, so that you can deploy fixes to your live code without also deploying new features or untested code.
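In Subversion terms that workflow is just a handful of copies and merges. A sketch, with an illustrative repository URL:

    # branch the trunk when it is feature complete
    svn copy http://svn.example.com/app/trunk \
             http://svn.example.com/app/branches/1.5 \
             -m "Create the 1.5 release branch"

    # tag the release once QA signs off, and deploy from the tag
    svn copy http://svn.example.com/app/branches/1.5 \
             http://svn.example.com/app/tags/1.5.0 \
             -m "Tag 1.5.0"

    # after committing a bug fix to the release branch (say as r1234),
    # port it to a trunk working copy
    svn merge -c 1234 http://svn.example.com/app/branches/1.5 trunk-wc
    svn commit trunk-wc -m "Port fix from the 1.5 branch (r1234)"

Branches and tags in Subversion are cheap copies, and the tag records exactly what was deployed.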

I would recommend using your distribution's native packaging system for deploying new code. If you have a package that contains your whole code base, you know all your code has been deployed in a more or less atomic operation, you can see at a glance what version is installed, and you can verify your code base using the package's checksums. Rolling back is just a case of installing the previously installed version of the package.
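As a sketch of the idea on a Debian-style system (the package name, version and paths are invented for illustration):

    # build a trivial package containing the whole code base
    mkdir -p ourapp-1.5.0/DEBIAN ourapp-1.5.0/var/www
    svn export http://svn.example.com/app/tags/1.5.0 ourapp-1.5.0/var/www/ourapp

    cat > ourapp-1.5.0/DEBIAN/control <<EOF
    Package: ourapp
    Version: 1.5.0
    Architecture: all
    Maintainer: Ops Team <ops@example.com>
    Description: Our ecommerce application
    EOF

    dpkg-deb --build ourapp-1.5.0    # produces ourapp-1.5.0.deb
    dpkg -i ourapp-1.5.0.deb         # deploy / upgrade in one step
    dpkg -i ourapp-1.4.9.deb         # rollback: install the previous version

RPM-based distributions have the equivalent workflow with a spec file and rpmbuild.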

The only roadblock I can see to implementing this is that you appear to have multiple copies of the code base for different customers running on a single server. I would try to arrange your code so that all customers run off the same files rather than copies. I don't know how easy that would be for you, but reducing the number of copies you have to deal with will massively reduce your headaches.
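One common arrangement (a sketch; the paths are invented, and this is essentially the symlink idea from your question) is a single deployed tree that every vhost points into:

    # one shared, deployed copy of the code
    svn export --force http://svn.example.com/app/tags/1.5.0 /var/www/shared/app

    # each customer's docroot is a symlink to the shared code
    ln -sfn /var/www/shared/app /var/www/site-a/htdocs
    ln -sfn /var/www/shared/app /var/www/site-b/htdocs

    # the bespoke css/images live outside the shared tree and are mapped
    # in per site, e.g. with an Apache Alias in each vhost's config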

I'm assuming, since you mentioned LAMP, that you're using PHP or another scripting language that doesn't require a compilation step. This means you're probably missing out on a wonderful process called continuous integration. The basic idea is that your code is continuously being tested to make sure it's still in a releasable state: every time someone checks in new code, a process takes it and runs the build and test process. With a compiled language you'd usually use this to make sure the code still compiles; with every language you should take the opportunity to run unit tests (your code is in testable units, isn't it?) and integration tests. For websites, Selenium is a good tool for integration tests. In our Java builds we also measure code coverage and code metrics to see how we progress over time. The best CI server we've found for Java is Hudson, but something like buildbot might work better for other languages. You can build your packages with the CI server, too.
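As a concrete sketch, the job a CI server runs on every check-in can start out as simple as this (assuming phpunit is installed and your tests live under tests/):

    #!/bin/bash
    # what a CI job might run on every commit; stop at the first failure
    set -e

    svn checkout http://svn.example.com/app/trunk build
    cd build

    # cheap syntax check across the whole tree
    find . -name '*.php' -print0 | xargs -0 -n1 php -l

    # unit tests now, Selenium integration tests and packaging later
    phpunit tests/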

David Pashley
  • Thanks. Yes, we are using PHP. I must admit I'm not too up on continuous integration; from what I know it's very similar to unit testing, but I don't know much more than that. We are keen on unit testing, but our codebase still has a lot of legacy procedural code that doesn't really lend itself to unit tests. Some interesting ideas, though; it would be good to hear your ideas about how our code could be better organised to prevent the replication. – robjmills Oct 11 '09 at 13:59
  • Continuous integration is literally just running automated testing on every check-in, or every hour, or every day. As long as you do it regularly and automatically, that's pretty much CI. – David Pashley Oct 11 '09 at 14:48
  • I saw this article today about using Hudson alongside PHP and Phing: http://toptopic.wordpress.com/2009/02/26/php-and-hudson/ – robjmills Oct 13 '09 at 10:58

We started using Puppet (the flagship product of Reductive Labs), a Ruby-based framework for automating sysadmin jobs. I was at PuppetCamp a couple of weeks ago; here are the video links:

Luke Kanies Presenting - Puppet Intro

Also, if you'd like to see all the presentations made at PuppetCamp in San Francisco, this is the link:

Presentations made on how others used Puppet

Enjoy.

Nikolas Sakic