7

My company will shortly be setting up a blog, and I'm planning on setting up two web servers to host the WordPress site for redundancy. Normally when we do releases to a site in a farm we push to one side, test, and then release to the other side. For WordPress updates we can do this easily enough. The problem, however, is how to handle the wp-content folder. As people publish posts and upload graphics, those files will need to be synced to the other server in the farm no matter which server the user uploads them to.

I could set up DFS to replicate the files, but that seems like overkill.

I could set up robocopy to run every 15 minutes or so, and then tell everyone who posts to schedule publication at least 15 minutes out so that the files have time to replicate.

Are there any better solutions out there? Perhaps a WordPress plugin that automatically replicates graphics to the other servers in the farm when they are uploaded to a post?

I'm running WordPress on Windows Server 2008, so Linux-only solutions won't help much.

mrdenny

6 Answers

1

I'm not a wizard with IIS, but hopefully the technique will translate over.

I'm presuming that there's a shared hostname that is load balanced between the two servers, and that there is also a publicly accessible name for each.

What you want is a conditional redirect on one of the two servers combined with some kind of file sync. If the URI starts with /wp-content and the file exists, serve it locally. Otherwise redirect to the other server. Server A redirects to B and vice-versa.

This should result in a seamless experience for viewers - they'll just get a temporary redirect for images in the window between the post going up and the sync running. Depending on bandwidth or redundancy concerns, your sync interval could be much longer than 15 minutes, since the site should render properly the moment the post goes up.

In nginx, I'd do this with a block like so:

location ~ ^/wp-content {
  # Serve the file locally if it has already been synced to this node
  if (-f $request_filename) {
    expires max;
    break;
  }
  # Otherwise redirect the request to the other node, which has the file
  rewrite ^/(.+)$ http://otherserver.com/$1 last;
}

nginx is available for Windows, but I doubt you want to switch web server software to do this. Hopefully the idea can be converted over to IIS or whichever software you're using.
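
For what it's worth, a rough IIS URL Rewrite equivalent might look something like the fragment below. This is an untested sketch: it assumes the URL Rewrite module is installed, and "otherserver.example.com" is just a placeholder for the other node's hostname.

<!-- web.config fragment for the blog site: if a wp-content file hasn't
     replicated to this node yet, redirect the request to the peer node -->
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="wp-content not replicated yet" stopProcessing="true">
          <match url="^wp-content/(.+)$" />
          <conditions>
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
          </conditions>
          <action type="Redirect" url="http://otherserver.example.com/wp-content/{R:1}" redirectType="Temporary" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>

As with the nginx version, each node's rule points at the other node.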

James F
  • I am using IIS. The sites don't have unique public IPs so I'm not sure how I'd be able to redirect them to the other server until the file had replicated. I've got about 40 other sites being hosted by this server (the blog is going to be one of them), so I can't change my web server. I do like the idea though. I wonder if there's a way to configure IIS (or WordPress) to check over the network share to the other node if the file isn't there locally. Ideally I want a solution which can scale beyond two nodes, but I'll take what I can get. – mrdenny Jun 29 '09 at 23:36
1

I'm putting this as a separate answer because it's a different approach:

What about putting the images in cloud storage (Amazon S3 or similar) and having your users link to the cloud copies? The bandwidth costs might be a bit higher and there are possibly training issues in getting users to upload to the cloud first, but it eliminates the need for local filesystem or cross-server checks.

It also should scale regardless of the number of servers you deploy.
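
On the WordPress side, a minimal sketch of the URL half of this would be to filter attachment URLs so they point at the cloud copy. The function name, hostnames and bucket name below are all made up, and it assumes something else is already copying the uploaded files to the bucket:

function blogfarm_cloud_attachment_url( $url ) {
    // Hypothetical local and S3 base URLs - replace with your own.
    $local = 'http://blog.example.com/wp-content/uploads';
    $cloud = 'http://my-bucket.s3.amazonaws.com/uploads';
    return str_replace( $local, $cloud, $url );
}
add_filter( 'wp_get_attachment_url', 'blogfarm_cloud_attachment_url' );

That keeps the editing workflow unchanged for the authors; only the published URLs change.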

James F
  • Our sales guys are going to be using the system. Let's just say they aren't technical. Uploading to S3 is probably beyond what they can do, and I'm trying to do this cost-free since we just spent a fortune on servers, storage and bandwidth. – mrdenny Jul 01 '09 at 05:45
  • I know this is old, but there is now a WordPress plugin called "Amazon S3 for WordPress" which automatically syncs uploads in WordPress to a given S3 account. It's transparent to the admin, who uses the same upload functions they always have. – MightyE Sep 14 '10 at 20:43
1

We've used Super Flexible File Synchronizer for stuff like this in the past. It works really well and has a number of options to control syncing.

Adam Brand
  • I'm liking Super Flexible File Synchronizer. It looks like it has a feature which will detect changes and automatically sync the folders. Have you used this feature? How well does it work? – mrdenny Jul 01 '09 at 05:52
  • I haven't used that feature, but I have used its folder monitoring feature for automatically moving files from one folder into another...that works very reliably. One thing to note is that the "scheduler" can run both as an application and as a service, so you will want to make sure it runs as a service. – Adam Brand Jul 01 '09 at 16:00
0

Is having the content on a single network file share (no DFS) an option?

How about Unison?

crb
  • I'd rather not. We are trying to avoid all single points of failure, and our file servers are busy enough with the network shares that they already have. – mrdenny Jun 30 '09 at 01:08
0

You can use rsync for this. Otherwise, if you have the files under source control, you could use something like Capistrano to roll things out to different machines (and even roll back if necessary).

When you have more than one machine, being able to deploy and roll back is very useful.

Jauder Ho
  • rsync wouldn't be a bad idea. Robocopy would be easier as it's already on the systems. It doesn't solve the problem of having to schedule the sync though. – mrdenny Jul 01 '09 at 05:43
0

How about robocopy with the following switches?

  1. To detect changes and trigger the sync:
     /MON:n :: MONitor source; run again when more than n changes seen.
     /MOT:m :: MOnitor source; run again in m minutes Time, if changed.
  2. Bandwidth-saving options:
     /RH:hhmm-hhmm :: Run Hours - times when new copies may be started.
     /PF :: check run hours on a Per File (not per pass) basis.
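
Putting those together, a rough sketch of one node's sync job (the local path and the \\WEB02 share are placeholders; /E rather than /MIR so a file uploaded on the far node never gets deleted by this node's pass):

rem Sketch only - adjust the paths and share name for your farm.
rem /E copies subfolders without deleting anything on the destination;
rem /MON:1 and /MOT:15 keep robocopy running and re-copy when changes
rem are seen and at least 15 minutes have passed since the last pass.
robocopy C:\inetpub\wwwroot\blog\wp-content \\WEB02\wwwroot\blog\wp-content /E /MON:1 /MOT:15 /R:2 /W:5

Run the mirror-image command on the other node so uploads flow both ways.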

Anyway, what did you finally use for this issue?

Home Boy
  • I ended up using DFS to do the work. We had some other sites on the same servers which needed near instant replication of data from server to server, so DFS fit the requirements of the other sites. Since there's no point in running two packages to do the same thing, I ended up using DFS for this site as well. – mrdenny Nov 11 '09 at 02:48