I have a RAID 6 array in my server. Until yesterday it consisted of 11 devices, each 1 TB in size. Yesterday I tried to extend the array:

I added a new device (/dev/sdb), partitioned it (resulting in /dev/sdb1) and tried to add sdb1 to the array:

mdadm --add /dev/md0 /dev/sdb1
mdadm --grow --raid-devices=12 /dev/md0

This triggered a reshape process. Fine, I thought.
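
For context, the reshape progress can be watched with the usual checks (nothing special here, just the standard status views for /dev/md0 from above):

# progress and estimated finish time of the reshape
cat /proc/mdstat

# more detail, including the reshape status line
mdadm --detail /dev/md0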

Somewhere during the reshape, an error occurred on the newly added disk, and the reshape came to a halt.

So I wanted to roll the whole thing back:

mdadm --fail /dev/md0 /dev/sdb1
mdadm --remove /dev/md0 /dev/sdb1
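
At this point the superblocks of the remaining members can be inspected to see how far the reshape got (sketch; /dev/sdc1 is just one example member, and the grep pattern is only illustrative):

# per-device superblock, including the reshape position
mdadm --examine /dev/sdc1

# kernel messages around the failure of the new disk
dmesg | grep -i -E 'md0|sdb'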

How can I get back to the old state? I tried

mdadm --grow --raid-devices=11 /dev/md0

which reported an issue with the size. Trying

mdadm --grow /dev/md0 --size=max
mdadm --grow --raid-devices=11 /dev/md0

triggered a reshape, but afterwards I still have a degraded array:

mdadm  --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Thu Mar  5 09:13:23 2015
     Raid Level : raid6
     Array Size : 9760983040 (9308.80 GiB 9995.25 GB)
  Used Dev Size : 976098304 (930.88 GiB 999.52 GB)
   Raid Devices : 12
  Total Devices : 11
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Wed Aug 16 11:12:17 2017
          State : clean, degraded
 Active Devices : 11
Working Devices : 11
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K 

           Name : Eldorado:0  (local to host Eldorado)
           UUID : 87b249b0:9a11effc:0d6e8524:0d0ddeb2
         Events : 732555

Number   Major   Minor   RaidDevice State
  12       8       97        0      active sync   /dev/sdg1
  13       8      113        1      active sync   /dev/sdh1
   2       8       33        2      active sync   /dev/sdc1
   3       8        1        3      active sync   /dev/sda1
   4       8      177        4      active sync   /dev/sdl1
   5       8      145        5      active sync   /dev/sdj1
   9       8       48        6      active sync   /dev/sdd
  16       8      128        7      active sync   /dev/sdi
  14       8       81        8      active sync   /dev/sdf1
  15       8      160        9      active sync   /dev/sdk
  11       8      193       10      active sync   /dev/sdm1
  22       0        0       22      removed

So how can I get back to a clean 11-device RAID 6?
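
I suspect the array size has to be reduced before the device count can be, roughly as below, but I'd like confirmation before running this on live data. The size value is my own calculation (9 data devices × 976098304 KiB from the output above), and the backup file path is just an example:

# shrink the usable array size to what 11 devices (9 data + 2 parity) can hold
# (8784884736 = 9 * 976098304 KiB -- my calculation, please check)
mdadm --grow /dev/md0 --array-size=8784884736

# then reshape back down to 11 raid devices, keeping a backup of the critical section
# (backup file path is just an example)
mdadm --grow /dev/md0 --raid-devices=11 --backup-file=/root/md0-grow.bak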
