We develop and maintain web applications, and in our current setup we use a three-tier system: database server, application server, and web server.

So far, so good.

The problem we are facing is that while in theory this setup is designed to help us balance load between these machines and insert new shards as necessary, in practice the strain it puts on our backroom network is becoming a severe bottleneck.

There is also the matter of serving static files: with our present setup, these either have to be served by the application (eating up the FastCGI processes available to handle incoming requests) or by using a web server as a local proxy on the machine running the application in the first place.

The question then becomes:

Would simply collapsing the web and application servers into one, with the simpler configuration that brings, the more direct access (local sockets rather than TCP), and the improved ability to serve static files through the web server, be likely to give better performance, or are there overhead factors I've missed?
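To illustrate the "local sockets rather than TCP" point, here is a minimal Python sketch (not our actual stack; the socket path and port are made-up examples) of the same toy backend bound either to a Unix domain socket on a combined box or to a TCP port when the tiers sit on separate machines:

    import os
    import socket

    def make_listener(unix_path=None, tcp_port=9000):
        """Return a listening socket: AF_UNIX if a path is given, else TCP."""
        if unix_path:
            # Local socket: requests never touch the backroom network.
            if os.path.exists(unix_path):
                os.unlink(unix_path)
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.bind(unix_path)
        else:
            # TCP: every request/response crosses the internal network.
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            sock.bind(("0.0.0.0", tcp_port))
        sock.listen(128)
        return sock

    if __name__ == "__main__":
        listener = make_listener(unix_path="/tmp/app.sock")
        print("listening on", listener.getsockname())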

1 Answer

Generally speaking, I'm a fan of divided architectures: a DB server farm, a "public www" server farm (commercial web site), and an app server farm - possibly with other layers of middleware as needed.
The logic behind this is old -- basically, the public www server doesn't need access to any sensitive data, so a curious hacker poking around at www.mycompany.com might compromise our marketing site, but they won't get anything else.

Here are a few other general ideas...


Re: backroom network load, if we're talking about the traffic from contacting your FastCGI hosts, I would say sticking a web server on those machines and letting them talk to the outside world directly may be a good idea.
One thing to bear in mind is maintaining isolation/protection of your database in the event of a security compromise, which could be an argument for pushing your FastCGI stuff onto the web servers and writing something more lightweight to talk to your database...

(If we're talking about something else let me know and I'll take a swing at it :-)
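To make the "something more lightweight to talk to your database" idea concrete, here's a rough Python sketch of a tiny internal query service -- the outer tier can only reach the handful of whitelisted queries it actually needs, so the web/FastCGI boxes never hold raw database credentials. The table, port, and SQLite backing store are stand-ins for illustration, not a recommendation for your real database:

    import json
    import sqlite3
    from http.server import BaseHTTPRequestHandler, HTTPServer

    DB_PATH = "app.db"  # stand-in for the real database connection

    class QueryHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Only one whitelisted query is reachable from the outer tier.
            if self.path != "/items":
                self.send_error(404)
                return
            with sqlite3.connect(DB_PATH) as conn:
                rows = conn.execute("SELECT id, name FROM items").fetchall()
            body = json.dumps([{"id": r[0], "name": r[1]} for r in rows]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # Create the toy table so the sketch runs standalone.
        with sqlite3.connect(DB_PATH) as conn:
            conn.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, name TEXT)")
        # Bind to the backroom interface only (address is illustrative).
        HTTPServer(("127.0.0.1", 8081), QueryHandler).serve_forever()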


Re: the static files issue, there are a number of ways to solve this one.

  • Put a copy of static data on every server that needs to send it.
    Let's call this the "Disk is cheap" method -- it keeps you from dumping a huge amount of load on a single box hosting your static content, and eliminates that single point of failure. The downside is that you have to synchronize that content somehow (deployment scripts, cron'd rsync, cvs/git/svn, etc.)

  • Install a caching infrastructure on your front-end servers
    This is a similar solution to "Disk is cheap" above, only you have one back-end server with the static content, and when the front-end servers need it they cache a copy for $LIFETIME, thus eliminating the need for synchronization scripts (the caching infrastructure does it for you). A rough sketch follows this list.

  • Put your static content on a content delivery network
    This really only works if you're not doing SSL stuff -- A commercial CDN solves the load problem and gives users geographically distributed points from which they can grab your content. The downside is that most browsers will pitch a fit if you do this with a site that has SSL unless your CDN is also secured (even then paranoid browsers should rightly complain).
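Here's that rough sketch of the caching approach from the second bullet, using only the Python standard library; the cache directory, lifetime, and back-end URL are made-up examples:

    import hashlib
    import os
    import time
    import urllib.request

    CACHE_DIR = "/tmp/static-cache"
    LIFETIME = 300  # seconds a cached copy is considered fresh

    def fetch_static(url):
        """Return the bytes for url, refetching from the back-end server
        only when the local cached copy is older than LIFETIME."""
        os.makedirs(CACHE_DIR, exist_ok=True)
        cache_path = os.path.join(CACHE_DIR, hashlib.sha1(url.encode()).hexdigest())
        if os.path.exists(cache_path) and time.time() - os.path.getmtime(cache_path) < LIFETIME:
            with open(cache_path, "rb") as f:
                return f.read()  # fresh enough: serve the local copy
        data = urllib.request.urlopen(url).read()  # stale or missing: hit the back end
        with open(cache_path, "wb") as f:
            f.write(data)
        return data

    # e.g. fetch_static("http://static-backend.internal/logo.png")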

voretaq7