I'm experimenting with various RAID configurations on a new Adaptec 6805 (not the 'E' model). Attached are 8x Hitachi 5400 RPM 2TB SATA 6Gb/s drives, using the miniSAS-to-SATA cables provided.

I created a RAID6 array comprising four disks, two from each connector. Then I tried expanding the array to include the remaining four disks.

My problem is that the RAID reconfiguration never seems to progress. After more than 24 hours, Adaptec Storage Manager still shows 0% completion. I'm not actively using the logical array, so I can't imagine that should be a significant factor.

  • Is this normal for an online reconfiguration?
  • Is this indicative of the amount of time it would take to rebuild the array should a drive fail?
  • Is it possible to force an offline reconfiguration of the array, assuming that would speed things up?

Edit:

Thanks for the responses. Just to clarify: there's no load on the drives and very little data (~5GB). After about 35 hours it finally showed about 1%, so I gave up and deleted the array.

My main concern is this: if it takes ~35 hours to do 1%, extrapolating that out means a rebuild after a drive failure would take somewhere around 4-5 months.
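A quick back-of-the-envelope check of that extrapolation (a minimal sketch in Python; it simply assumes the observed rate stays constant, which is itself an assumption):

    # Extrapolate total reconfiguration time from observed progress,
    # assuming a constant rate.
    hours_per_percent = 35                       # observed: ~35 h for the first 1%
    total_hours = hours_per_percent * 100
    print(total_hours / 24, "days")              # ~145.8 days
    print(total_hours / (24 * 30.44), "months")  # ~4.8 months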

I'll test this out by unplugging a drive and seeing how long it takes to rebuild.

peterh

7 Answers


I have an Adaptec 5405Z with 20x 2TB 7200 RPM drives on a SAS backplane. I attempted a reconfiguration to go from 8 drives to 20 drives. We use it for security video storage. Since the box was essentially brand new, I figured why not see how long it would take with ~2TB of data on the array. After about a week, with it only getting to 10%, I gave up: backed it up, wiped it, and started over. I tried the same test with RAID 10 and RAID 5; both seemed to crawl, but would have eventually finished. Though in truth, I am certain my DVR software was slowing down the reconfiguration by constantly writing to the disks. Did you factor additional load on the drives into the rebuild?

Obviously I have a few more drives than you, but for whatever reason, the reconfig seems to drag on for quite some time. When I tried to get Adaptec to answer the question, they weren't much help as the answer seemed to be "it depends on your configuration and how much data is on the array".

MikeAWood
  • There's no load on the drives, aside from some test data (5GB or so). After about 35 hours it finally ticked over to 1%. I gave up and deleted/recreated the array. –  Aug 04 '11 at 03:48
  • Normal is relative. It should take some hours to reconfigure online. The expected time is roughly the size of the disk divided by the throughput of a single drive, plus some small overhead (see the sketch after this list); with active usage that small overhead becomes a big one. But 24 hours with no progress is too long.
  • Yes, you can take that figure as an estimate of the time needed in case of a recovery.
  • It should be possible, from within the controller's configuration utility during boot-up.
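A minimal sketch of that estimate in Python (the 2TB size and ~100 MB/s throughput are illustrative assumptions, not measured values):

    # Rough rebuild/reconfiguration estimate: disk size divided by
    # single-drive throughput, scaled by an overhead factor.
    def rebuild_hours(disk_bytes, throughput_bps, overhead=1.2):
        return disk_bytes / throughput_bps * overhead / 3600

    # 2 TB drive at ~100 MB/s sustained (plausible for a 5400 RPM SATA disk):
    print(round(rebuild_hours(2e12, 100e6), 1), "hours")  # ~6.7

Anything far beyond that order of magnitude suggests the controller is throttling the operation or something else is wrong.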
mailq

The 0% bit is a bit odd; if the array were completely empty I'd expect the reconfiguration to be done quite quickly, certainly under an hour. That said, if there's data on it then this kind of thing can take days, and those disks are big and slow, which won't help either. And yes, it is indicative of how long rebuilds can take; that's why sysadmins don't like to use R5 or R6 with large, slow disks - rebuilds take ages and expose you to risk while they do so.

If you have no data on the array, or you can quickly move it to another drive, then consider destroying the array and rebuilding it as you need it. Personally I'd have bought 3TB drives and R10'd them, but that's your call.

Chopper3
  • Thanks - there's a small amount of data (a few GB of test data for some basic I/O throughput tests). I was trying to see what it would be like to do online capacity expansion at a later date. –  Aug 03 '11 at 13:32

Adaptec 5805 (with BBU) here, attempting to upsize an 8-disk RAID5 array. First we replaced all the 2TB disks with 3TB nearline SAS drives, where every rebuild (the array degraded every time we replaced a disk, of course) took about 3 days.

Now that all disks are replaced, we have started the upsizing from the ~13TB LUN to the new ~19TB. Well, after a week we have completed ~10% of this task.

Christoph

This seems long - I have 12x 750GB on a 5805Z (BBU), RAID 6, home server. When I lost one drive, it took ~4 hours to rebuild. During that rebuild another drive died (that's why I would NEVER use RAID 5 - disks always fail under heavy load, like a rebuild/OCE). That one also took about 4 hours.

I replaced four of the 750s with 3TB drives and created a second RAID6 array on the additional space (i.e. 4x ~2TB). Just last week I replaced another 750GB drive with a 3TB and expanded the second array (i.e. to 5x ~2TB) - it took just under a week (from 4TB to 6TB).

So you are upsizing the entire array at once? Mine seemed to be adding capacity at about 2TB/week. At that rate yours would extrapolate to about 3 weeks total, or roughly 33% per week (see the sketch below), but I don't know if the math scales like that. It would have been much faster to just define a second array in the additional space...
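A minimal sketch of that rate-based estimate (the 6TB figure is implied by my 3-weeks guess above - an assumption, not an exact number for your expansion):

    # Expansion-time estimate from an observed rate; assumes linear scaling.
    observed_rate_tb_per_week = 2.0   # my array: ~2 TB of new capacity per week
    capacity_to_add_tb = 6.0          # illustrative figure for the OP's expansion
    print(capacity_to_add_tb / observed_rate_tb_per_week, "weeks")  # 3.0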

Toyzrme

To add my recent experience with Online Capacity Expansion speed:

  • Adaptec 7805Q, maxCache disabled (must be)
  • RAID 1 consisting of 2x WD Gold 6 TB
  • original array size 3 TB, expanding to 6 TB

The online capacity expansion took ~36 hours. The existing 3 TB partition was almost full. The server was booted into Windows Server 2016 R2, but there was no load on the drives.
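For context, a rough effective-throughput figure implied by that run (a sketch; it assumes roughly the full 3 TB of existing data governed the expansion time):

    # Effective online-capacity-expansion throughput implied by the run above.
    data_bytes = 3e12          # ~3 TB of existing data (assumption)
    elapsed_s = 36 * 3600      # ~36 hours
    print(round(data_bytes / elapsed_s / 1e6, 1), "MB/s")  # ~23.1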

Ondrej Tucny

Providing my 2¢ here for those interested...

Since 2009, I have been using HGST 2TB HDDs in a RAID6 configuration on a HighPoint RocketRAID 4321 (IOP348 1.2GHz) 3Gb/s SAS controller, connected via SFF-8088 to a 15-bay storage tower equipped with an Areca ARC-8020-16 3Gb/s SAS expander. It was expanded to its maximum capacity by 2012, for 26TB of error-free, fast file transfers. To this day I still have all the original HDDs, holding a mix of content (mostly media recently), with the most important content also stored on my 2TB Dropbox account...

This year, after deciding to stay with HGST HDDs, I acquired my first 10TB HDD (10.0TB HGST Deskstar NAS 3.5-inch SATA 6.0Gb/s 7200RPM - HDN721010ALE604) and made it my primary Plex media server storage for several months...

After filling the single HDD rather quickly, I came across a lot of 5x Adaptec 6805T SAS RAID controllers with AFM-600 modules (4G NAND ZMM) for $25 each. So I decided to plan an upgrade of the Plex media server to RAID. But first I wanted to test the controllers in my lab ESXi 6.5 server [dual Xeon X5690 3.46 GHz (24 threads) and 96GB RAM on an EVGA SR-2 Classified motherboard based on the Intel 5520 chipset]. After upgrading each controller to the latest firmware (build 19204, from August 2017) and testing it, I was able to configure the controller along with maxView Storage Manager on the VMware server, using the VMware 6.0 driver v1.2.1-52040_cert and the VMware 6.0 maxView Storage Manager v2.05_22932.

I did thorough research on using the 6805T with the 10TB HGST Deskstar NAS 3.5-inch SATA 6.0Gb/s 7200RPM (HDN721010ALE604) HDD, but could only find that the HGST Ultrastar line of HDDs had been tested with the controller. I also found the Ask Adaptec article "Support for SATA and SAS disk drives with a size of 2TB or greater", which says the Series 6, 6E, 6T, and 6Q controllers generally tested OK with 10TB HDDs, that 12TB HDDs are supported, and that 14TB HDDs were not tested. So I went forward with my plan.

First, I purchased four of the 10TB HDDs on sale for $299.75 each from MacSales.com (a very good buy at the time, knocking $30 off the average price). After delivery, I started with a RAID 5 build that took 46 hours to complete:

Building/Verifying 4x 10TB HDDs RAID5 
Start: Sat/20181103 20:51:26
Complete: Mon/20181105 18:53:54
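For reference, the elapsed time can be computed straight from log timestamps like those (a minimal Python sketch):

    from datetime import datetime

    fmt = "%Y%m%d %H:%M:%S"
    start = datetime.strptime("20181103 20:51:26", fmt)
    end = datetime.strptime("20181105 18:53:54", fmt)
    print(end - start)  # 1 day, 22:02:28 -> ~46.0 hours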

Then I copied over all the contents of my single HDD, added that drive to the controller, and started a RAID 5 to RAID 6 migration, making a 5x HDD array providing 30TB of storage. It's in progress and taking much longer:

Reconfiguring (Migration/Expansion to RAID6 after adding Connector 0, Device 3 - the very first 10TB HDD, used as a single drive for weeks…)
Start: Thu/20181108 16:49
20%: Sat/20181110 17:00
27%: Sun/20181111 17:00
34%: Mon/20181112 17:00
42%: Tue/20181113 17:00
48%: Wed/20181114 17:00

I anticipate the process will take a little less than two weeks in total (see the extrapolation sketch below). I will make sure to provide an addendum to this post upon its completion.
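That expectation is a straight-line extrapolation of the logged progress (a minimal Python sketch; it assumes the rate stays constant, which the update below shows it did not):

    from datetime import datetime

    start = datetime(2018, 11, 8, 16, 49)
    checkpoint = datetime(2018, 11, 14, 17, 0)  # the 48% entry above
    eta = start + (checkpoint - start) / 0.48   # scale elapsed time to 100%
    print(eta)  # 2018-11-21 05:11:55 -> ~12.5 days total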

I have purchased three more HDDs at the special price in anticipation of expanding the RAID 6 array. I will report those results as an addendum to this post as well.

I recently picked up a used Areca ARC-8026-24 6Gb/s SAS expander from eBay for $150 (fully verified operational with my 15-bay storage tower) that I will be testing with the 6805T later. Those results will follow in an addendum as well.

UPDATE 20181123: The 30TB, 4x-10TB RAID5 to 5x RAID6 reconfiguration took a little longer than the early progress suggested. Instead of a little less than two weeks, the overall reconfiguration took 15 days, 1 hour, and 45 minutes:

Start: Thu/20181108 16:49
20%: Sat/20181110 17:00
27%: Sun/20181111 17:00
34%: Mon/20181112 17:00
42%: Tue/20181113 17:00
48%: Wed/20181114 17:00
54%: Thu/20181115 17:00
60%: Fri/20181116 17:00
67%: Sat/20181117 17:00
73%: Sun/20181118 17:00
79%: Mon/20181119 17:00
85%: Tue/20181120 17:00
90%: Wed/20181121 17:00
94%: Thu/20181122 17:00
99%: Fri/20181123 17:00
100%: Fri/20181123 18:34

I'd like to note that I performed some reads/writes on the array, as well as a few server reboots, during the reconfiguration.

During the reconfiguration, read/write performance was greatly degraded, to 15-25 MB/s. After the reconfiguration, though, I was getting 500 MB/s reads and 300 MB/s writes, tested against an SSD so the RAID 6 array itself would be the bottleneck. Even so, I doubted the RAID 6 was fully maxed out, since the Windows 10 Task Manager Performance tab readings never went over 97%.

Also, while the RAID5-to-RAID6 reconfiguration was still in progress, I created a 10TB RAID1 from 2x 10TB HDDs on the same Adaptec 6805T controller (the quick initialization took only a few seconds) and tested throughput by copying from the RAID1 array to the RAID6 array. Moving 500GB of data showed a bit over 150 MB/s, with the RAID1 a bit more taxed than the RAID6 according to the Windows 10 Task Manager Performance tab: the RAID6 array averaged 75% and the RAID1 array averaged 95%. (RAID6: 5x 10TB HDDs on Connector 0, Device 3 and Connector 1, Devices 0-3; RAID1: 2x 10TB HDDs on Connector 0, Devices 1-2.)

My next step is to determine how long a RAID6 array expansion from 30TB (5x 10TB HDDs) to 40TB (6x 10TB HDDs) will take.