
We have a computing cluster with a relatively large RAID-6 storage system. There are 22 disks in this unit and each disk is 3 TB (with 2 disks' worth of parity, that leaves roughly 20*3 = 60 TB usable). Recently many of the drives failed and a rebuild was impossible. I deleted the unit and recreated it with RAID-6 and a stripe size of 256k. After 3ware was done with creation and initialization (which took about a day), I skipped partitioning and ran mkfs.xfs on the whole device. I executed the following:

mkfs.xfs -d sunit=512 -d swidth=10240 /dev/sdb

However, it has been about 5 hours and there is no progress output from mkfs.xfs. Does anybody have experience with mkfs.xfs on large volumes? Is it expected to take this long?
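For reference, sunit and swidth are given in 512-byte sectors, so these values should match the array geometry described above: sunit=512 is the 256k stripe unit, and swidth=10240 is 20 stripe units for the 20 data disks. If I am reading the mkfs.xfs documentation right, the same geometry can be expressed more directly with su/sw:

# su = stripe unit in bytes, sw = number of data disks (stripe width as a multiple of su)
mkfs.xfs -d su=256k,sw=20 /dev/sdb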

Thanks

  • Try formatting the disk without the additional parameters, and check the SMART attributes of each disk in the RAID array (a rough per-disk check is sketched after these comments). – Mikhail Khirgiy Sep 09 '16 at 05:24
  • That's an awful lot of disks to be in a single array. If you're not too far along in the process, you should probably reconfigure as RAID60 (if your controller supports it) or two RAID6 arrays of 11 disks each, and then stripe in your OS. – longneck Sep 09 '16 at 19:05
  • This is the original factory setting. Are you saying that this might cause an issue in the future? – Lawless Sep 09 '16 at 19:08
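Following up on the first comment, one rough way to check the SMART status of each physical drive behind a 3ware controller is smartmontools' 3ware passthrough. The /dev/twa0 device node and the 0–21 port range below are assumptions for this particular 22-disk unit; older or newer cards may expose /dev/twe0 or /dev/twl0 instead.

# Print the overall SMART health of each drive on the controller's ports 0-21
for port in $(seq 0 21); do
    echo "=== port $port ==="
    smartctl -H -d 3ware,$port /dev/twa0
done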

1 Answer


I managed to resolve the issue. There were two problems:

  1. There was a corrupted disk in the array (SMART failure).
  2. The write cache of the unit was off.
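In case it helps anyone else, the unit's write cache can be inspected and turned back on from tw_cli; the /c0/u0 controller/unit path below is an assumption and may differ on other systems.

# Show the unit's current settings, including the write cache state
tw_cli /c0/u0 show

# Turn the unit's write cache back on
tw_cli /c0/u0 set cache=on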