4

I have been handed the challenge of adding 4 additional drives to an IBM ThinkServer with a RAID 5 array, running MS Windows Server 2008 Std. and living life as a SQL server. The first question is "is it even possible to add additional drives to an existing RAID array?" The second is "if I manage to get the drives added, will it mess with the database?"

Thanks in advance.

Ward Duncan

3 Answers

2

The answer to your first question depends on your RAID card.

  • HW RAID: all the server-grade cards I have worked with support extending an array. Cheaper ones might not, and migrating the array to one with more disks can take a long time; depending on the RAID card, drive speed and the size of the drives, it could take days. Access to your array will be slow while this process is running.
    I recommend making a backup first, and then checking that backup.
  • Software RAID: no experience with Windows-based software RAID.
  • Fake RAID: RUN AWAY. (Or make sure you have excellent backups.)

If your HW RAID card does not support it, you will have to make a backup, test it, delete the array, create a new array, and restore the data. This means downtime for the server.

The same procedure will work with software RAID and fake RAID, but it likewise means downtime for the server.
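As a minimal sketch of that backup step, assuming the built-in Windows Server Backup feature (on Server 2008 its command-line tools have to be installed as a feature first), and with the backup target E: and the volume list as placeholders for your environment:

    rem One-off full backup to a dedicated backup disk (E: is a placeholder).
    rem -allCritical also captures everything needed for a bare-metal restore.
    wbadmin start backup -backupTarget:E: -include:C:,D: -allCritical -quiet

    rem Verify the backup actually landed before touching the array.
    wbadmin get versions -backupTarget:E:

Since the box is a SQL server, native SQL database backups on top of this would not hurt either.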


The second question takes a bit longer to answer: extending the array to include more drives will likely expand the size of the (single, virtual) disk which Windows sees. It will not change the size of the partitions on that disk; you will have to do that yourself afterwards (see the sketch below).

Summarized: no, it will not mess with the database. But neither will it get you where you want to be until you grow the partitions.
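As a minimal sketch of that partition step, using the diskpart tool that ships with Server 2008 (the volume number 2 below is a placeholder; take the real number from the list output):

    rem grow.txt -- a diskpart script; run it with: diskpart /s grow.txt
    rem Make Windows notice that the virtual disk has grown.
    rescan
    rem Find the volume number of the data partition.
    list volume
    rem "2" is a placeholder; use the number shown by "list volume".
    select volume 2
    rem Grow the partition into the new unallocated space.
    extend

Extending an NTFS data volume this way works online, but as with everything else here, do it only after a verified backup.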

Hennes
  • Deleting the array and restoring from backup means downtime, period. It doesn't matter whether you are using software or hardware raid. – psusi Jul 19 '12 at 01:23
  • That is why some hardware RAID cards allow you to migrate between RAID levels or to add disks to an existing RAID, all while the RAID is actively being used. It is just somewhat slower when this process is running. If you do that on a setup with logical volumes then you can first migrate the RAID and then extend the lvm volume. All without shutting down. – Hennes Jul 19 '12 at 01:28
  • Right... I think I see now that you mean that the downtime applied to the technique rather than the type of raid, but it didn't sound like it the first time I read it due to the way you worded it. You might want to move the downtime clause to the prior sentence so it is clear. – psusi Jul 19 '12 at 01:49
1

Yes, it's usually possible to add drives to most arrays, but we'd need to know whether it's a hardware or software RAID array. That said, please be VERY careful with RAID 5; it's pretty much detested in professional sysadmin circles, especially when combined with large, slow SATA drives, because the math works out such that almost any time you replace a disk you're near-certain to incur at least one unrecoverable read error and lose your data (rough numbers below). So try to use RAID 6 or 10 if you can; some RAID controllers let you live-migrate from 5 to 6 without downtime, so see if yours can.
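To put back-of-the-envelope numbers on that claim (assuming consumer SATA drives with the commonly quoted unrecoverable-read-error rate of 1 per 10^14 bits, and a hypothetical rebuild that has to read four surviving 2 TB drives; both figures are illustrative, not from the question):

    bits read during rebuild = 4 drives x 2 TB x 8 bits/byte = 6.4 x 10^13 bits
    expected UREs            = 6.4 x 10^13 bits x 10^-14     = 0.64
    P(at least one URE)      = 1 - e^(-0.64)                 = roughly 47%

On those assumed numbers a rebuild is close to a coin flip; enterprise drives rated at 1 per 10^15 bits improve the odds tenfold, and RAID 6's second parity stripe is what saves you when the error does hit.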

Chopper3
  • Agreed that large raid 5 arrays are a time bomb. A 5-drive set isn't bad, but anything you're adding 4 additional drives to is just asking for trouble. – Hyppy Jul 18 '12 at 20:42
0

RAID should be transparent to your applications and OS: the array is presented to the OS as a single volume, so the OS won't know the difference (unless you're doing something silly like using software RAID or fakeRAID).

And yes, it is possible to add disks to a RAID5 array. Whether it's possible with your array depends on the capabilities of the specific RAID card you're using. If you can post that detail (the sketch below shows one way to pull it), I can probably help you out.
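Purely as an illustration of how to find that detail: many ThinkServers ship with LSI MegaRAID-based controllers (an assumption; check Device Manager for your actual card), and if that's what you have, LSI's MegaCli utility will dump the controller model and logical-drive layout from a Windows command prompt:

    rem Assumes an LSI MegaRAID controller and LSI's MegaCli tool are present.
    rem Controller model, firmware and supported features:
    MegaCli -AdpAllInfo -aALL
    rem Logical drives: RAID level, size and state:
    MegaCli -LDInfo -Lall -aALL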

On the other hand, RAID5 with many disks or large disks is a horrendously bad idea (during a rebuild you're almost certain to hit an unrecoverable read error, and with only single parity that renders your array useless), so I'd probably advise against doing this. For the general case I'd want RAID6 at a minimum, and would generally prefer RAID10. And for reasons I hope are obvious, I also prefer to have my OS on a different array or disk from my data partition; it makes it a lot easier if I need to change things up later, like converting a RAID5 array to a RAID6 or 10, for example.

And, as pointed out in the comment below (thanks, Hennes!), selecting an appropriate RAID level for your databases is much more involved than just slapping them on whatever the OS has. (Another argument for segregating OS and data arrays.) The RAID level you select will impact database performance, and what you need to optimize your database for (fast read access, large numbers of small writes, small numbers of large writes, etc.) should drive which RAID level you pick for the array the database sits on; the rough numbers below illustrate why. The SF "canonical" answer on RAID levels has more information (thanks again, Hennes) on the advantages and disadvantages of the common/standard RAID levels, and should probably be your next stop.
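As a rough sketch of why the level matters for write-heavy databases (the ~150 IOPS per 7.2k spindle is an assumed rule of thumb, not a measured figure): every small random write on RAID5 costs four disk I/Os (read data, read parity, write data, write parity), while on RAID10 it costs two (one write to each side of the mirror):

    effective random-write IOPS = (spindles x per-spindle IOPS) / write penalty
    8 disks, RAID 5:  (8 x 150) / 4 = ~300 write IOPS
    8 disks, RAID 10: (8 x 150) / 2 = ~600 write IOPS

Reads spread across all spindles in both layouts, which is why a read-mostly database tolerates RAID5 far better than a write-heavy one.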

HopelessN00b
  • I second the "possible, but a bad idea" sentiment. RAID levels on a database should be carefully selected, depending on the queries the DB gets. If there are lots of small writes then RAID5 is a really bad idea. For more details on RAID levels see this serverfault post: http://serverfault.com/questions/339128/what-are-the-different-widely-used-raid-levels-and-when-should-i-consider-them – Hennes Jul 18 '12 at 20:43
  • I even overlooked the fact that the RAID level would impact DB performance. Doi. I'll add that to my answer. Thanks. – HopelessN00b Jul 18 '12 at 20:50