
I am thinking about buying a new server-based development box for development (redundantly redundant, I know ;)). Ideally, I want to run something like ESXi or Xen Hypervisor at the lowest level. Then I want to add (at least) 5 Linux VMs for the following uses:

  • 2 Web Servers
  • 2 Application Servers
  • 1 Database Server

I want to load balance the 2 web servers and the 2 application servers and (somewhat obviously) they need to be all networked together to simulate a production environment.

Also, it used to be the case that the recommendation was to put each VM on its own hard drive, but I'm not sure that holds water anymore. Any advice?

Does anyone have any advice on how to pull this off? Gotchas, look-outs, etc.?

Thanks!

3 Answers


I run production SQL Server environments under VMware ESX 4.0.

One thing you have to be aware of is HOW virtualization technologies work. If this is just a test environment, the servers don't all need 4 GB of RAM each. The web servers can probably get by fine on 512 MB each, the same goes for the app servers, and give the database at least 1 GB. I'm part of a Virtual Chapter for PASS that specializes in these kinds of setups; you should check it out. It's at http://virtualization.sqlpass.org (no sign-up needed). We syndicate blogs from some of the top virtualization experts in the field today and we do free webcasts on various topics.

Another great resource is Brent Ozar's posts on virtualization (he leads the Virtual Chapter). You can find those at http://www.brentozar.com/sql/virtualization-best-practices/

SQLChicken

They'd all see each other over the network because they can each have their own IP address and such, but I think you can also set them up with a virtual NIC/switch so they talk to each other directly.
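
If you just want a quick sanity check that the VMs can actually reach each other once they're networked, a small script like the one below will do it. The hostnames and ports are placeholders I made up for the sketch (web1/web2 on 80, app1/app2 on 8080, db1 on 5432), so swap in whatever your VMs actually use:

    #!/usr/bin/env python3
    """Quick TCP reachability check between the VMs.

    Hostnames and ports below are placeholders -- substitute the
    addresses/ports your web, app, and database VMs actually use.
    """
    import socket

    # (host, port) pairs to probe; all values here are hypothetical
    TARGETS = [
        ("web1", 80),
        ("web2", 80),
        ("app1", 8080),
        ("app2", 8080),
        ("db1", 5432),   # e.g. PostgreSQL; change to 3306 for MySQL, etc.
    ]

    def reachable(host, port, timeout=3):
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        for host, port in TARGETS:
            status = "OK" if reachable(host, port) else "UNREACHABLE"
            print(f"{host}:{port} -> {status}")

Run it from any one of the VMs and you'll see immediately which tiers can talk to which.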

As for the hard disk question, I'd think that you're going to have to consider redundancy. I mean, if your hardware dies, you don't lose one computer. You lose 5. Hard disk failure is fairly common, so usually you'd run RAID 10 or at least RAID 1 on a VMWare server, depending on how hard you're going to push the system for throughput.

That said, even if you got individual drives, you could still run into contention at the controller level, since all of your VMs would be pulling through the same controller on the same physical machine.

If you're pushing the systems to the point where you need to consider splitting drives per VM, you probably need to reconsider using virtualization, or you'd need to look at multiple machines and full-blown VMware to get machine-level balancing and monitoring (which isn't cheap).

We're running about 7 servers on a single RAID 5 array with 16 GB of memory and it's humming along with no problems, but we're not slamming the VMs. You can overload a VM server with just two servers if you're saturating disk, network, or memory. It entirely depends on what you're using it for.

My recommendation would be to try the VM solution. If it doesn't work, you'll still have at least one really good system to start your development farm with. And if you find one particularly problematic system, you can keep a VMware box for some of the servers and use either a second self-contained VMware box or a separate dedicated machine for that one (virtualize 4 servers, run the problematic one on its own hardware). You should still end up coming out ahead.

Without estimates of what you're doing with the systems or how hard you're going to be hitting them (traffic, throughput, etc.), it's hard to offer more advice than that.

Bart Silverstrim

I want to load balance the 2 web servers and the 2 application servers and (somewhat obviously) they need to be all networked together to simulate a production environment.

Does anyone have any advice on how to pull this off? Gotchas, look-outs, etc.?

Now, you didn't explain what equipment/server you were using, so speaking in the abstract, I would say that "simulating" a production environment will be very difficult in terms of disk I/O. I have no idea what kind of web app you're planning to build/deploy, but I have already done (essentially) what you're trying to do, albeit not exactly with your general specifications. The disk subsystem is a big factor that's easy to overlook, and with database servers it's not difficult to push hard drives to their limits. In terms of memory and CPU, as long as there is enough to go around, you can simulate a production environment fairly well, but the odds are your VM environment will be slower overall than a non-virtualized solution. And the disk subsystem can ultimately affect memory and CPU, depending on what kind of load you anticipate or how you plan to test.

Also, it used to be the case that the recommendation was to put each VM on its own hard drive, but I'm not sure that holds water anymore. Any advice?

In your case, it's tough to say. I would approach the VM route as a configuration test more than anything else. Try to get the load balancing working correctly and don't sweat the total throughput so much. Trying to optimize performance in a virtual environment is very tricky no matter which vendor you go with. Having one hard drive per VM would be ideal, but even if you could swing that, you may not get "real world" results.
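
For the balancing itself, the balancer is usually just HAProxy or nginx running on another small VM in front of the two web servers. Purely to illustrate the round-robin idea (not something you'd run in production), here's a rough sketch; the web1/web2 addresses and port 8000 are placeholders I made up, not anything from your setup:

    #!/usr/bin/env python3
    """Minimal round-robin HTTP load balancer sketch.

    Only an illustration of the concept -- in practice you'd put
    HAProxy or nginx in front of the web VMs. Backend addresses
    are hypothetical placeholders.
    """
    import itertools
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Hypothetical web-server VMs; replace with your real addresses.
    BACKENDS = itertools.cycle(["http://web1:80", "http://web2:80"])

    class RoundRobinProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            backend = next(BACKENDS)  # pick the next backend in turn
            try:
                with urllib.request.urlopen(backend + self.path, timeout=5) as resp:
                    body = resp.read()
                    self.send_response(resp.status)
                    self.send_header("Content-Length", str(len(body)))
                    self.end_headers()
                    self.wfile.write(body)
            except OSError:
                # Covers unreachable backends, timeouts, and HTTP errors.
                self.send_error(502, f"backend {backend} unreachable")

    if __name__ == "__main__":
        # Listen on all interfaces, port 8000 (arbitrary choice for the sketch).
        HTTPServer(("0.0.0.0", 8000), RoundRobinProxy).serve_forever()

Point a browser at the balancer VM on port 8000 and successive requests will alternate between the two backends, which is the behavior you want to verify before worrying about throughput.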

It's clear that you're either trying to prepare for a newer environment for this app, OR you're hoping to make do with the VM server until you need (or have the budget) to go for the non-VM solution. In either scenario, building the VM server for the purpose of testing configuration makes perfect sense, but trying to use it to measure any kind of real-world performance/capacity is going to be inaccurate. Virtualization is a trade-off: multiple operating systems sharing the same hardware resources. You're not going to get identical performance from the virtualized world and the non-virtualized world.

osij2is
  • "simulate a production environment" was probably too strong. I don't care too much about I/O or speed at this point - I just want to ensure the pieces parts are communicating well and the systems are working as I would expect. Speed / Perf tuning will come a little farther down the line. Thanks! –  Dec 18 '09 at 16:15
  • If that's the case, then you should be fine. Yeah, it wasn't clear if you were looking at performance as a factor but yes, if I/O is not a concern at this time, you should be good to do this all on a VM. Sorry if I misunderstood! – osij2is Dec 18 '09 at 16:21