
Possible Duplicate:
Is it necessary to have RAID in a virtual machine?

I am in the process of building a new VM host with VMware vSphere Essentials 5.1. The server will house several guest OSes, including several Ubuntu Linux boxes. The host itself uses RAID 5 across 10 drives in total, with the 10th drive acting as a global hot spare.

My question is: with the host configured for RAID, should I configure the Ubuntu guest OSes with software RAID, as if each box were standalone with all the drives, or would this be unnecessary? I understand the reasoning for using Linux software RAID on a standalone box with the same drive layout, but since the drives are managed by the host and are already in a redundant configuration, should I just create a single VMDK for each server?

Thanks

– rws907

2 Answers


It's absolutely unnecessary. You do NOT need to add another layer of RAID protection within your virtual machines when using VMware ESXi on supported hardware.

Just create normal VMDKs of the appropriate size for your virtualized guests.
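For example, from the ESXi shell a thin-provisioned virtual disk can be created with vmkfstools; the datastore path and size below are illustrative assumptions, and in practice you would normally just size the disk in the vSphere Client when creating the VM:

    # Create a 100 GB thin-provisioned VMDK on an existing datastore
    # (path and size are assumptions for illustration)
    vmkfstools -c 100G -d thin /vmfs/volumes/datastore1/ubuntu01/ubuntu01.vmdk

Either way, the guest only ever sees a single virtual disk; the redundancy is provided by the RAID controller underneath.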

– ewwhite

Consider the following:

  • Avoid RAID 5 for virtualization, especially if you're going to serve websites from the guests. RAID 5 gives reasonably good read performance, but small random writes suffer its parity write penalty, so if your workload involves writing lots of small files, again, avoid it!

with the host configured for RAID, should I configure the Ubuntu Guest OSes with software RAID as if the box was standalone with all the drives or would this be unnecessary?

Creating a RAID within a RAID 5 will kill its performance and give you no benefit, since the enclosing volume is still the same single volume. Imagine losing the big RAID 5 volume: how would you recover the software RAID out of an inaccessible underlying volume?
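For illustration, the in-guest setup the question describes would look something like the sketch below (device names are assumptions); every MD member here would be backed by the same underlying RAID 5 datastore, so the array adds write overhead without adding redundancy:

    # Inside an Ubuntu guest: assembling four virtual disks into MD RAID 5.
    # All four members live on the same host RAID 5 volume, so this buys
    # no extra fault tolerance; if the host volume dies, they all die.
    sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 \
        /dev/sdb /dev/sdc /dev/sdd /dev/sde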

I would suggest breaking the RAID 5 into a RAID 50; at the very least you will mitigate the write penalty and lower the performance impact of the rebuild process after a single drive failure. Split your 10 drives like this:

4 HDD RAID 5 [span 0] + 4 HDD RAID 5 [span 1] + 2 hot spares

Or, if you feel lucky, don't use any hot spares and build your array with 5 drives per span.
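As a rough capacity check, using the 2 TB drives mentioned in the comments below and counting one parity drive per RAID 5 span:

    # Usable capacity for 10 x 2 TB drives, one parity drive per span
    echo $(( (9 - 1) * 2 ))       # RAID 5:  9 drives + 1 hot spare -> 16 TB
    echo $(( 2 * (4 - 1) * 2 ))   # RAID 50: 2 x 4 drives + 2 HS    -> 12 TB
    echo $(( 2 * (5 - 1) * 2 ))   # RAID 50: 2 x 5 drives, no HS    -> 16 TB

So the 4+4+2 layout trades 4 TB of usable space for the hot spares and shorter rebuilds, while the 5+5 layout keeps the full 16 TB.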

– Martino Dino
  • This is actually bad advice. RAID 5 is fine as long as you have a large enough write cache. Most SAN vendors will recommend RAID 5 or 6 for VMs and just spec out the right amount of cache to negate the write penalty. You rarely ever need RAID 10 if your cache is spec'd correctly. Also, RAID 50 is more dangerous in certain situations than RAID 5, though definitely faster. In almost any case where you'd use RAID 50 for the increased fault tolerance, you'd be better off with RAID 6 and the proper amount of write cache. – MDMarra Jan 02 '13 at 20:46
  • Well, I don't know what experience you have in SAN environments, but from my own I can tell you for sure that RAID 50 comes out ahead in both reliability and speed. Check the workload impact and rebuild times of an actively used 9- or 10-drive RAID 5 volume and you'll see what I mean; then there are the benefits of splitting the I/O impact of patrol reads across 2 separate volumes... – Martino Dino Jan 03 '13 at 12:41
  • This is probably very implementation-specific. Many vendors like EMC have a set number of disks that can be in a RAID group. RAID 5 on the VNX series is limited to 5-disk groups, for example. You then use storage pools and FAST VP to tier your storage in a way that causes the LUNs to exist in 1 GB slices across all RAID groups in a given pool. It seems like many other vendors have similar technology in newer storage nodes. Certainly rebuild time is a factor, but like I said before, performance really doesn't factor in if you have the proper amount of cache. – MDMarra Jan 03 '13 at 13:16
  • Well, obviously EMC got the point, and without even noticing it you're setting up something that is closer to RAID 50 than to anything else. – Martino Dino Jan 03 '13 at 13:50
  • It's actually nothing at all like RAID 50. The 1GB slices float between different tiers of disk, so some of the LUN can exist on SSD, 15k, 10K, and 7.2K. – MDMarra Jan 03 '13 at 14:00
  • Thanks for all of this technical differentiation. All drives are 10k 2 TB 6 Gbps SAS drives. The budget doesn't leave me enough room for a large SAN, so I need to find a good price point for fast-ish drives that give me the maximum amount of storage for the $$ I have to work with. – rws907 Jan 03 '13 at 16:23