
We have a Dell R815 with 6 x 900 GB SAS disks and a PERC H700 controller.

We want to use it as a dedicated bare-metal vSphere server (standalone).

We have no idea what the best way to partition the beast is.

Currently it's one big RAID 5 array with 3 virtual disks: 0 = 1.8 TB, 1 = 1.8 TB, 2 = 500 GB.

We could just run with this, install vSphere 5 on disk 2, and I guess use the other two partitions for VMs (i.e. create 2 storage pools). However, 500 GB is probably a waste for just installing vSphere. Also, we guess that to get the best performance we would want different VMs running on different disks via different channels in the controller?

Does anyone have any suggestions for the best disk/RAID layout?

John Little
  • John: No offense but I get the impression that not a lot of thought or planning has occurred up to this point. Your question smacks of "I've got this thing, I'd like to do something with it, what's the best way to do that?". My suggestion would be to sit down, determine what your goals, objectives, and needs are and go from there. For instance, why did you purchase the server, for what specific need or goal? What configuration serves that need or that goal? In addition, "What's the best way to do X?" is an entirely subjective question in this case. What's best for meeting your needs/goals? – joeqwerty Feb 01 '12 at 12:52
  • Good points. We know exactly what we want to do with it - it has to replicate a production environment with 20 servers, plus 6 firewalls and about 10 VLANs. As it's staging, performance and reliability are not absolute requirements, but if we had to choose, we would go with performance. We have searched the VMware site and read about 10 VMware PDF docs, but have not yet found anywhere that discusses or recommends disk layouts, unfortunately. – John Little Feb 01 '12 at 13:26

2 Answers


Firstly, please just don't use RAID 5 - look around this site; it's one of the most commonly discussed areas, and simply put, pro sysadmins don't use RAID 5 unless it's data they couldn't care less about. It's horrid - use RAID 6 or RAID 1/10; there's no excuse for not doing so.

Then on to your question: create one RAID 6/10 array, carve out one small boot disk (say 10 GB), then split the rest into <=2 TB disks, leaving any extra at whatever it works out to be.
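If it helps to see the arithmetic, here's a rough sketch (assuming vendor gigabytes of 10^9 bytes and ignoring controller overhead, so the real usable figures will be slightly lower) of how six 900 GB drives carve up under each layout:

```python
# Rough capacity sketch for 6 x 900 GB drives under RAID 6 vs RAID 1+0,
# split into a small boot disk plus <=2 TB datastore-sized chunks.
DRIVES = 6
DRIVE_GB = 900

usable = {
    "RAID 6":   (DRIVES - 2) * DRIVE_GB,   # two drives' worth of parity
    "RAID 1+0": (DRIVES // 2) * DRIVE_GB,  # half the spindles are mirrors
}

BOOT_GB = 10          # small boot virtual disk
DATASTORE_GB = 2000   # keep each carved-out disk at or under 2 TB

for level, total in usable.items():
    remaining = total - BOOT_GB
    full, leftover = divmod(remaining, DATASTORE_GB)
    print(f"{level}: {total} GB usable -> {BOOT_GB} GB boot, "
          f"{full} x {DATASTORE_GB} GB datastore(s), {leftover} GB left over")
```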

vSphere 5 can deal with >2 TB virtual disks, but there are a lot of caveats around that, and if you don't need >2 TB virtual disks then I'd avoid the complexity. Don't worry about trying to optimise with such a small system. Oh, and if you've got Storage DRS, just put all the disks except the boot one (best to delete any datastore the installer may create on that) into one group and let it manage itself.

Chopper3
  • Excellent advice, thanks. Have read up on RAID 6 - it's potentially slower and wastes more disk than RAID 5, so we will stick with RAID 5, as performance is more important than reliability (and rebuild time is not an issue) – John Little Feb 01 '12 at 13:23
  • If anyone has any links to any articles about this subject, we would be very grateful. – John Little Feb 01 '12 at 13:27
  • @JohnLittle See [here](http://serverfault.com/q/339128/72586) - the main concern with RAID5 is not the long rebuild time (which is generally about the same as any other rebuild of a full drive, just requiring more reads); what you'll want to be careful of is the increased odds of an unrecoverable read error on another disk during a rebuild (causing unrecoverable data corruption) and the write hole (which can silently corrupt data, and can be mitigated somewhat with a battery-backed or non-volatile write cache). – Shane Madden Feb 01 '12 at 17:20

With virtualization you will have a lot of random writes, which is really the performance downfall of RAID 5 or 6; in our testing, even a RAID controller with a gig of cache memory could not make up for this. Also, RAID 6 is typically not much slower than RAID 5 and gets used mainly when the array is made up of larger (>1 TB) or slower-RPM (<10K) drives and/or larger arrays (>8 spindles). In those cases there is a greater chance of a second failure while the first is rebuilding, which is a catastrophic event if you're using RAID 5.
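To put rough numbers on the random-write hit, here's a back-of-the-envelope sketch of the usual RAID write-penalty math; the ~140 IOPS per 10K SAS spindle figure is an assumption, and a write-back controller cache will help bursts but not sustained random writes:

```python
# Back-of-the-envelope sustained random-write estimate for 6 spindles,
# using the classic per-write back-end IO penalties:
# RAID 10 = 2, RAID 5 = 4, RAID 6 = 6.
SPINDLES = 6
IOPS_PER_DISK = 140   # assumed rough figure for a 10K RPM SAS drive
WRITE_PENALTY = {"RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

raw = SPINDLES * IOPS_PER_DISK
for level, penalty in WRITE_PENALTY.items():
    print(f"{level}: ~{raw // penalty} sustained random-write IOPS "
          f"(raw {raw}, penalty {penalty})")
```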

I would make do with a single RAID 10, 1.8 TB volume configured as a single VMFS datastore. If you need more space, you're going to have to either suck up the performance hit and go with RAID 5, or purchase more drives. You don't have very many drives, so there's not much opportunity for carving them up into separate volumes to handle competing workloads.

Install to and boot ESXi from a USB stick, and create an image backup to a second USB stick just in case that one fails. Our HP DL380 boxes have an SD card slot on the motherboard for this purpose; maybe the Dells have this too? Boot from SAN/PXE is another potential option for ESXi. Consider also keeping a cold-spare drive on the shelf, or adding a hot spare, for that inevitable day of a drive failure.
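A minimal sketch of taking that image backup from a Linux box, assuming the stick shows up as /dev/sdX (the device path is hypothetical; check it with lsblk first, and run as root):

```python
# Raw image copy of the ESXi boot stick to a file you can later write
# onto a spare stick. /dev/sdX is a placeholder - verify the device first.
import shutil

SOURCE_DEVICE = "/dev/sdX"            # the ESXi boot stick (assumption)
IMAGE_FILE = "esxi-boot-backup.img"   # image to restore onto a spare stick

with open(SOURCE_DEVICE, "rb") as src, open(IMAGE_FILE, "wb") as dst:
    shutil.copyfileobj(src, dst, length=4 * 1024 * 1024)  # 4 MiB chunks
print(f"Wrote image of {SOURCE_DEVICE} to {IMAGE_FILE}")
```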

Based on the mostly Windows environment around here with Exchange, file shares, and a smattering of SQL and application servers, I would guess that you should plan on adding an additional controller and disk shelf, as you will find an I/O performance hit with 20 VM guests on only 6 drives. Keep an eye on your datastore overall latency numbers; when they go above 15-20 ms people will start to notice. You may end up with only some servers virtualized. Beware also of VM sprawl as users get wind of this new-found ability to spin up a server in mere seconds. ;)
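A tiny sketch of the kind of latency check worth keeping around; the per-datastore samples here are made up, and in practice you'd pull the numbers from esxtop or the vSphere performance charts:

```python
# Flag datastore latency averages above the ~15-20 ms range noted above.
LATENCY_WARN_MS = 15

samples_ms = {                      # hypothetical per-datastore averages
    "datastore1": [4.2, 6.1, 18.7, 22.3],
    "datastore2": [2.8, 3.5, 5.0, 4.9],
}

for datastore, values in samples_ms.items():
    worst = max(values)
    status = "investigate" if worst > LATENCY_WARN_MS else "ok"
    print(f"{datastore}: worst {worst:.1f} ms -> {status}")
```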

JGurtz