Simple mdadm RAID 1 not activating spare

25

15

I had created a RAID 1 array called /dev/md0 out of two 2 TB HDD partitions (/dev/sdb1 and /dev/sdc1) using mdadm on Ubuntu 12.04 LTS Precise Pangolin.

The command sudo mdadm --detail /dev/md0 used to indicate both drives as active sync.

Then, for testing, I failed /dev/sdb1, removed it, then added it again with the command sudo mdadm /dev/md0 --add /dev/sdb1
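
The fail and remove steps aren't quoted above; they were of the usual form (reconstructed from memory, not copied from my shell history):

sudo mdadm /dev/md0 --fail /dev/sdb1      # mark the member as faulty
sudo mdadm /dev/md0 --remove /dev/sdb1    # detach it from the array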

watch cat /proc/mdstat showed a progress bar for the array rebuild, but I wasn't going to spend hours watching it, so I assumed that the software knew what it was doing.

After the progress bar was no longer showing, cat /proc/mdstat displays:

md0 : active raid1 sdb1[2](S) sdc1[1]
      1953511288 blocks super 1.2 [2/1] [U_]

And sudo mdadm --detail /dev/md0 shows:

/dev/md0:
        Version : 1.2
  Creation Time : Sun May 27 11:26:05 2012
     Raid Level : raid1
     Array Size : 1953511288 (1863.01 GiB 2000.40 GB)
  Used Dev Size : 1953511288 (1863.01 GiB 2000.40 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Mon May 28 11:16:49 2012
          State : clean, degraded 
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 1

           Name : Deltique:0  (local to host Deltique)
           UUID : 49733c26:dd5f67b5:13741fb7:c568bd04
         Events : 32365

    Number   Major   Minor   RaidDevice State
       1       8       33        0      active sync   /dev/sdc1
       1       0        0        1      removed

       2       8       17        -      spare   /dev/sdb1

I've been told that mdadm automatically replaces removed drives with spares, but /dev/sdb1 isn't being moved into the expected position, RaidDevice 1.
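
For reference, a way to see what each member's superblock thinks its role, state, and event count are (I didn't capture this output at the time, so only the command is shown):

sudo mdadm --examine /dev/sdb1 /dev/sdc1   # prints the per-device metadata that md uses to decide roles during assembly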


UPDATE (30 May 2012): A badblocks destructive read-write test of the entire /dev/sdb yielded no errors as expected; both HDDs are new.

As of the latest edit, I assembled the array with this command:

sudo mdadm --assemble --force --no-degraded /dev/md0 /dev/sdb1 /dev/sdc1

The output was:

mdadm: /dev/md0 has been started with 1 drive (out of 2) and 1 rebuilding.

Rebuilding looks like it's progressing normally:

md0 : active raid1 sdc1[1] sdb1[2]
      1953511288 blocks super 1.2 [2/1] [U_]
      [>....................]  recovery =  0.6% (13261504/1953511288) finish=2299.7min speed=14060K/sec

unused devices: <none>

I'm now waiting on this rebuild, but I'm expecting /dev/sdb1 to become a spare just like the five or six times that I've tried rebuilding before.


UPDATE (31 May 2012): Yeah, it's still a spare. Ugh!


UPDATE (01 June 2012): I'm trying Adrian Kelly's suggested command:

sudo mdadm --assemble --update=resync /dev/md0 /dev/sdb1 /dev/sdc1

Waiting on the rebuild now...


UPDATE (02 June 2012): Nope, still a spare...


UPDATE (04 June 2012): P.B. brought up a concern that I overlooked: perhaps /dev/sdc1 is encountering I/O errors. I hadn't bothered to check /dev/sdc1 because it appeared to be working just fine and it was brand new, but I/O errors towards the end of the drive are a reasonable possibility.

I bought these HDDs on sale, so it would be no surprise if one of them were already failing. Plus, neither of them has support for S.M.A.R.T., so no wonder they were so cheap...

Here is the data recovery procedure I just made up and am following:

  1. sudo mdadm /dev/md0 --fail /dev/sdb1 so that I can take out /dev/sdb1.
  2. sudo mdadm /dev/md0 --remove /dev/sdb1 to remove /dev/sdb1 from the array.
  3. /dev/sdc1 is already mounted at /media/DtkBk.
  4. Format /dev/sdb1 as ext4.
  5. Mount /dev/sdb1 to /media/DtkBkTemp.
  6. cd /media to work in that area.
  7. sudo chown deltik DtkBkTemp to give me (username deltik) rights to the partition.
  8. Copy all files and directories: sudo rsync -avzHXShP DtkBk/* DtkBkTemp (a verification sketch follows this list).
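
Not part of the original plan, but a way to double-check the copy afterwards would be an rsync dry run in checksum mode (paths as in the steps above):

sudo rsync -ancv DtkBk/ DtkBkTemp/   # -n makes no changes, -c compares file contents; anything listed differs between the two trees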

UPDATE (06 June 2012): I did a badblocks destructive write-mode test of /dev/sdc, using the following procedure:

  1. sudo umount /media/DtkBk to allow tearing down of the array.
  2. sudo mdadm --stop /dev/md0 to stop the array.
  3. sudo badblocks -w -p 1 /dev/sdc -s -v to wipe the suspect hard drive and, in the process, check for I/O errors. If there are I/O errors, that is not a good sign. Hopefully, I can get a refund... (A non-destructive alternative is sketched after this list.)
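
For anyone who wants to check a drive without destroying its contents, badblocks defaults to a read-only scan when -w is omitted (not what I needed here, since I was wiping the drive anyway):

sudo badblocks -sv /dev/sdc   # read-only surface scan; prints any blocks it cannot read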

I have now confirmed that there are no input/output issues on either HDD.

From all this investigating, my two original questions still stand.


My questions are:

  1. Why isn't the spare drive becoming active sync?
  2. How can I make the spare drive become active?

Deltik

Posted 2012-05-28T17:37:00.983

Reputation: 16 807

Answers

15

By default, the command below simply chucks the drive into the array without actually doing anything with it, i.e. it is a member of the array but not active in it; it becomes a spare:

sudo mdadm /dev/md0 --add /dev/sdb1

If you have a spare, you can put it to use by forcing the array's active drive count to grow. With 3 drives and 2 expected to be active, you would need to increase the active count to 3.

mdadm --grow /dev/md0 --raid-devices=3

The raid array driver will notice that you are "short" a drive, and then look for a spare. Finding the spare, it will integrate it into the array as an active drive. Open a spare terminal and let this rather crude command line run in it, to keep tabs on the re-sync progress. Be sure to type it as one line or use the line break (\) character, and once the rebuild finishes, just type Ctrl-C in the terminal.

while true; do sleep 60; clear; sudo mdadm --detail /dev/md0; echo; cat /proc/mdstat; done

Your array will now have two active drives that are in sync, but because there are not 3 drives, it will not be 100% clean. Remove the failed drive, then resize the array. Note that the --grow flag is a bit of a misnomer - it can mean either grow or shrink:

sudo mdadm /dev/md0 --fail /dev/{failed drive}
sudo mdadm /dev/md0 --remove /dev/{failed drive}
sudo mdadm --grow /dev/md0 --raid-devices=2

With regard to errors, a link problem with the drive (i.e. the PATA/SATA port, cable, or drive connector) is not enough to trigger a failover of a hot spare, as the kernel typically will switch to using the other "good" drive while it resets the link to the "bad" drive. I know this because I run a 3-drive array, 2 hot, 1 spare, and one of the drives just recently decided to barf up a bit in the logs. When I tested all the drives in the array, all 3 passed the "long" version of the SMART test, so it isn't a problem with the platters, mechanical components, or the onboard controller - which leaves a flaky link cable or a bad SATA port. Perhaps this is what you are seeing. Try switching the drive to a different motherboard port, or using a different cable, and see if it improves.
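
If you want to run the same check, the long SMART self-test can be started and read back roughly like this (the device name is a placeholder; the test runs on the drive itself in the background):

sudo smartctl -t long /dev/sdX        # start the extended self-test (can take hours)
sudo smartctl -l selftest /dev/sdX    # view the result once it has finished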


A follow-up: I completed my expansion of the mirror to 3 drives, failed and removed the flaky drive from the md array, hot-swapped the cable for a new one (the motherboard supports this) and re-added the drive. Upon re-add, it immediately started a re-sync of the drive. So far, not a single error has appeared in the log despite the drive being heavily used. So, yes, drive cables can go flaky.

Avery Payne

Posted 2012-05-28T17:37:00.983

Reputation: 2 371

As an update, this answer remains the most useful to most people, which is why I have accepted it, but what actually happened was that one of the drives in my RAID 1 array was bad. It was most likely /dev/sdc1 at the time, because /dev/sdc1 was being read while /dev/sdb1 was being written, and bad sectors on /dev/sdb1 would have been transparently remapped during writing.

– Deltik – 2015-08-18T11:20:53.510

To keep tabs on the resync process, do watch -n 60 cat /proc/mdstat, where 60 is the number of seconds between refreshes. – Erk – 2018-08-27T01:01:36.003

This command was what I needed: mdadm --grow /dev/md127 --raid-devices=2. Also, I use watch -n 0.1 cat /proc/mdstat to watch the progress. – xdevs23 – 2020-02-05T19:35:38.890

Flaky link cable? I buy that explanation, but I can't test it out anymore because I re-purposed both drives months ago. I accept this answer as the best answer for my particular problem, but another great answer is this one.

– Deltik – 2013-01-13T22:42:02.413

8

I've had exactly the same problem, and in my case I found out that the active RAID disk suffered from read errors during synchronization. Therefore the new disk was never successfully synchronized and was kept marked as a spare.

You might want to check your /var/log/messages and other system logs for errors. It might also be a good idea to check your disk's SMART status:

1) Run the short test:

smartctl -t short /dev/sda

2) Display the test results:

smartctl -l selftest /dev/sda

In my case this returned something like this:

=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed: read failure       90%      7564         27134728
# 2  Short offline       Completed: read failure       90%      7467         1408449701

I had to boot a live distro and manually copy the data from the defective disk to the new (currently "spare") one.

P.B.

Posted 2012-05-28T17:37:00.983

Reputation: 131

Aha! I didn't think to suspect the active drive for I/O errors. For some reason, S.M.A.R.T. is not supported on these HDDs. This and possible I/O errors on two brand new HDDs? I think I made a bad buy... Anyway, I'm taking data recovery procedures right now onto the HDD that I know is good. I'll update soon. – Deltik – 2012-06-04T19:58:27.173

+50 rep to you P.B.. Nobody was able to answer my questions correctly, but I figured that instead of wasting 50 reputation points to nothing, I'd give them to you as a welcome gift. Welcome to Stack Exchange!

– Deltik – 2012-06-07T05:41:36.253

3

I had exactly the same problem and always thought that my second disk, which I wanted to re-add to the array, had errors. But it was my original disk that had read errors.

You could check it with smartctl -t short /dev/sdX and see the results a few minutes later with smartctl -l selftest /dev/sdX. For me it looked like this:

=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed: read failure       20%     25151         734566647

I tried to fix them with this manual. That was fun :-). I know you have checked both disks for errors, but I think your problem is that the disk which is still in the md array has read errors, so adding the second disk fails.

Update

You should additionally run smartctl -a /dev/sdX. If you see Current_Pending_Sector > 0, something is wrong:

197 Current_Pending_Sector  0x0012   098   098   000    Old_age   Always       -       69
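
A quick way to pull out just that attribute, if you don't want to scan the whole report, would be something like:

sudo smartctl -A /dev/sdX | grep -i pending   # a non-zero raw value in the last column means sectors are waiting to be remapped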

For me the problem was definitely that I removed a disk from the RAID just for testing, and resyncing could not be done because of read failures. The sync aborted halfway through. When I checked the disk which was still in the RAID array, smartctl reported problems.

I could fix them with the manual above and saw the number of pending sectors reduced. But there were too many, and it is a long and boring procedure, so I used my backup and restored the data on a different server.

As you didn't have the opportunity to use SMART, I guess your self-test did not reveal those broken sectors.

For me it is a lesson learned: Check your disks before you remove one from your array.

Janning

Posted 2012-05-28T17:37:00.983

Reputation: 145

This is no longer the accepted answer, but it is still a good answer and may help someone else. This answer is the most applicable to me, but this answer is probably the most applicable to other people. Also, I take back what I said about RAID in this comment.

– Deltik – 2015-07-19T19:08:41.300

By the time you answered, the RAID 1 array ceased to exist and both drives were found to have no I/O errors. Can you verify that your answer is applicable? – Deltik – 2012-06-14T17:44:45.333

Finally accepted. This answer is the most likely to help future visitors. Me, I gave up on RAID in general. It's not like I own a datacenter. – Deltik – 2012-09-09T18:38:13.007

3

I have had a similar issue and fixed it by growing the RAID array's number of disks from 1 to 2.

mdadm --grow --raid-devices=2 /dev/md1

Shaun

Posted 2012-05-28T17:37:00.983

Reputation: 31

3

UPDATE (24 May 2015): After three years, I investigated the true cause of the RAID 1 array being degraded.

tl;dr: One of the drives was bad, and I didn't notice this because I had only run a full surface test on the good drive.

Three years ago, I didn't think to check any logs about I/O issues. Had I thought to check /var/log/syslog, I would've seen something like this when mdadm gave up on rebuilding the array:

May 24 14:08:32 node51 kernel: [51887.853786] sd 8:0:0:0: [sdi] Unhandled sense code
May 24 14:08:32 node51 kernel: [51887.853794] sd 8:0:0:0: [sdi]
May 24 14:08:32 node51 kernel: [51887.853798] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
May 24 14:08:32 node51 kernel: [51887.853802] sd 8:0:0:0: [sdi]
May 24 14:08:32 node51 kernel: [51887.853805] Sense Key : Medium Error [current]
May 24 14:08:32 node51 kernel: [51887.853812] sd 8:0:0:0: [sdi]
May 24 14:08:32 node51 kernel: [51887.853815] Add. Sense: Unrecovered read error
May 24 14:08:32 node51 kernel: [51887.853819] sd 8:0:0:0: [sdi] CDB:
May 24 14:08:32 node51 kernel: [51887.853822] Read(10): 28 00 00 1b 6e 00 00 00 01 00
May 24 14:08:32 node51 kernel: [51887.853836] end_request: critical medium error, dev sdi, sector 14381056
May 24 14:08:32 node51 kernel: [51887.853849] Buffer I/O error on device sdi, logical block 1797632

To get that output in the log, I seeked to the first problematic LBA (14381058, in my case) with this command:

root@node51 [~]# dd if=/dev/sdi of=/dev/zero bs=512 count=1 skip=14381058
dd: error reading ‘/dev/sdi’: Input/output error
0+0 records in
0+0 records out
0 bytes (0 B) copied, 7.49287 s, 0.0 kB/s

No wonder md gave up! It can't rebuild an array from a bad drive.

New technology (better smartmontools hardware compatibility?) has allowed me to get S.M.A.R.T. information out of the drive, including the last five errors (of 1393 errors so far):

root@node51 [~]# smartctl -a /dev/sdi
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-3.13.0-43-generic] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Hitachi Deskstar 5K3000
Device Model:     Hitachi HDS5C3020ALA632
Serial Number:    ML2220FA040K9E
LU WWN Device Id: 5 000cca 36ac1d394
Firmware Version: ML6OA800
User Capacity:    2,000,398,934,016 bytes [2.00 TB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    5940 rpm
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ATA8-ACS T13/1699-D revision 4
SATA Version is:  SATA 2.6, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is:    Sun May 24 14:13:35 2015 CDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART STATUS RETURN: incomplete response, ATA output registers missing
SMART overall-health self-assessment test result: PASSED
Warning: This result is based on an Attribute check.

General SMART Values:
Offline data collection status:  (0x84) Offline data collection activity
                                        was suspended by an interrupting command from host.
                                        Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.
Total time to complete Offline
data collection:                (21438) seconds.
Offline data collection
capabilities:                    (0x5b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        No Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   1) minutes.
Extended self-test routine
recommended polling time:        ( 358) minutes.
SCT capabilities:              (0x003d) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000b   100   100   016    Pre-fail  Always       -       0
  2 Throughput_Performance  0x0005   136   136   054    Pre-fail  Offline      -       93
  3 Spin_Up_Time            0x0007   172   172   024    Pre-fail  Always       -       277 (Average 362)
  4 Start_Stop_Count        0x0012   100   100   000    Old_age   Always       -       174
  5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  Always       -       8
  7 Seek_Error_Rate         0x000b   100   100   067    Pre-fail  Always       -       0
  8 Seek_Time_Performance   0x0005   146   146   020    Pre-fail  Offline      -       29
  9 Power_On_Hours          0x0012   097   097   000    Old_age   Always       -       22419
 10 Spin_Retry_Count        0x0013   100   100   060    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       161
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       900
193 Load_Cycle_Count        0x0012   100   100   000    Old_age   Always       -       900
194 Temperature_Celsius     0x0002   127   127   000    Old_age   Always       -       47 (Min/Max 19/60)
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       8
197 Current_Pending_Sector  0x0022   100   100   000    Old_age   Always       -       30
198 Offline_Uncorrectable   0x0008   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x000a   200   200   000    Old_age   Always       -       2

SMART Error Log Version: 1
ATA Error Count: 1393 (device log contains only the most recent five errors)
        CR = Command Register [HEX]
        FR = Features Register [HEX]
        SC = Sector Count Register [HEX]
        SN = Sector Number Register [HEX]
        CL = Cylinder Low Register [HEX]
        CH = Cylinder High Register [HEX]
        DH = Device/Head Register [HEX]
        DC = Device Command Register [HEX]
        ER = Error register [HEX]
        ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.

Error 1393 occurred at disk power-on lifetime: 22419 hours (934 days + 3 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 51 06 02 70 db 00  Error: UNC 6 sectors at LBA = 0x00db7002 = 14381058

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  25 00 08 00 70 db 40 00   1d+03:59:34.096  READ DMA EXT
  25 00 08 00 70 db 40 00   1d+03:59:30.334  READ DMA EXT
  b0 d5 01 09 4f c2 00 00   1d+03:57:59.057  SMART READ LOG
  b0 d5 01 06 4f c2 00 00   1d+03:57:58.766  SMART READ LOG
  b0 d5 01 01 4f c2 00 00   1d+03:57:58.476  SMART READ LOG

Error 1392 occurred at disk power-on lifetime: 22419 hours (934 days + 3 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 51 06 02 70 db 00  Error: UNC 6 sectors at LBA = 0x00db7002 = 14381058

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  25 00 08 00 70 db 40 00   1d+03:59:30.334  READ DMA EXT
  b0 d5 01 09 4f c2 00 00   1d+03:57:59.057  SMART READ LOG
  b0 d5 01 06 4f c2 00 00   1d+03:57:58.766  SMART READ LOG
  b0 d5 01 01 4f c2 00 00   1d+03:57:58.476  SMART READ LOG
  b0 d5 01 00 4f c2 00 00   1d+03:57:58.475  SMART READ LOG

Error 1391 occurred at disk power-on lifetime: 22419 hours (934 days + 3 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 51 06 02 70 db 00  Error: UNC 6 sectors at LBA = 0x00db7002 = 14381058

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  25 00 08 00 70 db 40 00   1d+03:56:28.228  READ DMA EXT
  25 00 08 00 70 db 40 00   1d+03:56:24.549  READ DMA EXT
  25 00 08 00 70 db 40 00   1d+03:56:06.711  READ DMA EXT
  25 00 10 f0 71 db 40 00   1d+03:56:06.711  READ DMA EXT
  25 00 f0 00 71 db 40 00   1d+03:56:06.710  READ DMA EXT

Error 1390 occurred at disk power-on lifetime: 22419 hours (934 days + 3 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 51 06 02 70 db 00  Error: UNC 6 sectors at LBA = 0x00db7002 = 14381058

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  25 00 08 00 70 db 40 00   1d+03:56:24.549  READ DMA EXT
  25 00 08 00 70 db 40 00   1d+03:56:06.711  READ DMA EXT
  25 00 10 f0 71 db 40 00   1d+03:56:06.711  READ DMA EXT
  25 00 f0 00 71 db 40 00   1d+03:56:06.710  READ DMA EXT
  25 00 10 f0 70 db 40 00   1d+03:56:06.687  READ DMA EXT

Error 1389 occurred at disk power-on lifetime: 22419 hours (934 days + 3 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 51 06 02 70 db 00  Error: UNC 6 sectors at LBA = 0x00db7002 = 14381058

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  25 00 08 00 70 db 40 00   1d+03:56:06.711  READ DMA EXT
  25 00 10 f0 71 db 40 00   1d+03:56:06.711  READ DMA EXT
  25 00 f0 00 71 db 40 00   1d+03:56:06.710  READ DMA EXT
  25 00 10 f0 70 db 40 00   1d+03:56:06.687  READ DMA EXT
  25 00 f0 00 70 db 40 00   1d+03:56:03.026  READ DMA EXT

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed: read failure       90%     21249         14381058

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

Ahh… that'd do it.

Now, I've solved this question in three easy steps:

  1. Become a system administrator in three years.
  2. Check the logs.
  3. Come back to Super User and laugh at my approach from three years ago.

UPDATE (19 July 2015): For anyone who's curious, the drive finally ran out of sectors to remap:

root@node51 [~]# smartctl -a /dev/sdg
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-3.13.0-43-generic] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Hitachi Deskstar 5K3000
Device Model:     Hitachi HDS5C3020ALA632
Serial Number:    ML2220FA040K9E
LU WWN Device Id: 5 000cca 36ac1d394
Firmware Version: ML6OA800
User Capacity:    2,000,398,934,016 bytes [2.00 TB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    5940 rpm
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ATA8-ACS T13/1699-D revision 4
SATA Version is:  SATA 2.6, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is:    Sun Jul 19 14:00:33 2015 CDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART STATUS RETURN: incomplete response, ATA output registers missing
SMART overall-health self-assessment test result: FAILED!
Drive failure expected in less than 24 hours. SAVE ALL DATA.
See vendor-specific Attribute list for failed Attributes.

General SMART Values:
Offline data collection status:  (0x85) Offline data collection activity
                                        was aborted by an interrupting command from host.
                                        Auto Offline Data Collection: Enabled.
Self-test execution status:      ( 117) The previous self-test completed having
                                        the read element of the test failed.
Total time to complete Offline
data collection:                (21438) seconds.
Offline data collection
capabilities:                    (0x5b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        No Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   1) minutes.
Extended self-test routine
recommended polling time:        ( 358) minutes.
SCT capabilities:              (0x003d) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000b   099   099   016    Pre-fail  Always       -       2
  2 Throughput_Performance  0x0005   136   136   054    Pre-fail  Offline      -       93
  3 Spin_Up_Time            0x0007   163   163   024    Pre-fail  Always       -       318 (Average 355)
  4 Start_Stop_Count        0x0012   100   100   000    Old_age   Always       -       181
  5 Reallocated_Sector_Ct   0x0033   001   001   005    Pre-fail  Always   FAILING_NOW 1978
  7 Seek_Error_Rate         0x000b   086   086   067    Pre-fail  Always       -       1245192
  8 Seek_Time_Performance   0x0005   146   146   020    Pre-fail  Offline      -       29
  9 Power_On_Hours          0x0012   097   097   000    Old_age   Always       -       23763
 10 Spin_Retry_Count        0x0013   100   100   060    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       167
192 Power-Off_Retract_Count 0x0032   092   092   000    Old_age   Always       -       10251
193 Load_Cycle_Count        0x0012   092   092   000    Old_age   Always       -       10251
194 Temperature_Celsius     0x0002   111   111   000    Old_age   Always       -       54 (Min/Max 19/63)
196 Reallocated_Event_Count 0x0032   001   001   000    Old_age   Always       -       2927
197 Current_Pending_Sector  0x0022   100   100   000    Old_age   Always       -       33
198 Offline_Uncorrectable   0x0008   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x000a   200   200   000    Old_age   Always       -       2

SMART Error Log Version: 1
ATA Error Count: 2240 (device log contains only the most recent five errors)
        CR = Command Register [HEX]
        FR = Features Register [HEX]
        SC = Sector Count Register [HEX]
        SN = Sector Number Register [HEX]
        CL = Cylinder Low Register [HEX]
        CH = Cylinder High Register [HEX]
        DH = Device/Head Register [HEX]
        DC = Device Command Register [HEX]
        ER = Error register [HEX]
        ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.

Error 2240 occurred at disk power-on lifetime: 23763 hours (990 days + 3 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  10 51 f0 18 0f 2f 00  Error: IDNF 240 sectors at LBA = 0x002f0f18 = 3084056

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  35 00 f0 18 0f 2f 40 00      00:25:01.942  WRITE DMA EXT
  35 00 f0 28 0e 2f 40 00      00:25:01.168  WRITE DMA EXT
  35 00 f0 38 0d 2f 40 00      00:25:01.157  WRITE DMA EXT
  35 00 f0 48 0c 2f 40 00      00:25:01.147  WRITE DMA EXT
  35 00 f0 58 0b 2f 40 00      00:25:01.136  WRITE DMA EXT

Error 2239 occurred at disk power-on lifetime: 23763 hours (990 days + 3 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  10 51 5a 4e f7 2e 00  Error: IDNF 90 sectors at LBA = 0x002ef74e = 3077966

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  35 00 f0 b8 f6 2e 40 00      00:24:57.967  WRITE DMA EXT
  35 00 f0 c8 f5 2e 40 00      00:24:57.956  WRITE DMA EXT
  35 00 f0 d8 f4 2e 40 00      00:24:57.945  WRITE DMA EXT
  35 00 f0 e8 f3 2e 40 00      00:24:57.934  WRITE DMA EXT
  35 00 f0 f8 f2 2e 40 00      00:24:57.924  WRITE DMA EXT

Error 2238 occurred at disk power-on lifetime: 23763 hours (990 days + 3 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  10 51 40 a8 c6 2e 00  Error: IDNF 64 sectors at LBA = 0x002ec6a8 = 3065512

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  35 00 f0 f8 c5 2e 40 00      00:24:49.444  WRITE DMA EXT
  35 00 f0 08 c5 2e 40 00      00:24:49.433  WRITE DMA EXT
  35 00 f0 18 c4 2e 40 00      00:24:49.422  WRITE DMA EXT
  35 00 f0 28 c3 2e 40 00      00:24:49.412  WRITE DMA EXT
  35 00 f0 38 c2 2e 40 00      00:24:49.401  WRITE DMA EXT

Error 2237 occurred at disk power-on lifetime: 23763 hours (990 days + 3 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  10 51 ea be ba 2e 00  Error: IDNF 234 sectors at LBA = 0x002ebabe = 3062462

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  35 00 f0 b8 ba 2e 40 00      00:24:39.263  WRITE DMA EXT
  35 00 f0 c8 b9 2e 40 00      00:24:38.885  WRITE DMA EXT
  35 00 f0 d8 b8 2e 40 00      00:24:38.874  WRITE DMA EXT
  35 00 f0 e8 b7 2e 40 00      00:24:38.862  WRITE DMA EXT
  35 00 f0 f8 b6 2e 40 00      00:24:38.852  WRITE DMA EXT

Error 2236 occurred at disk power-on lifetime: 23763 hours (990 days + 3 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  10 51 86 c2 2a 2e 00  Error: IDNF 134 sectors at LBA = 0x002e2ac2 = 3025602

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  35 00 f0 58 2a 2e 40 00      00:24:25.605  WRITE DMA EXT
  35 00 f0 68 29 2e 40 00      00:24:25.594  WRITE DMA EXT
  35 00 f0 78 28 2e 40 00      00:24:25.583  WRITE DMA EXT
  35 00 f0 88 27 2e 40 00      00:24:25.572  WRITE DMA EXT
  35 00 f0 98 26 2e 40 00      00:24:25.561  WRITE DMA EXT

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short captive       Completed: read failure       50%     23763         869280
# 2  Extended offline    Completed without error       00%     22451         -
# 3  Short offline       Completed without error       00%     22439         -
# 4  Extended offline    Completed: read failure       90%     21249         14381058
1 of 2 failed self-tests are outdated by newer successful extended offline self-test # 2

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

Deltik

Posted 2012-05-28T17:37:00.983

Reputation: 16 807

Yep, exactly what just happened to my RAID! This is the actual answer to your own question! Thanks for keeping this up to date!!! – Preexo – 2015-08-18T11:07:05.863

1

In my case, it was a bad source disk too, although it looked at the time like it wasn't (the /proc/mdstat progress went above 99.9% normally, but it actually failed at 99.97%, which coincided with when a regular sync would finish). So you need to check the dmesg(1) output -- it will tell you if there are any read errors.

You can see details of my case in Debian bug #767243. I finally managed to finish the sync by force-overwriting a few bad sectors on the source disk (which were luckily unused in my case; otherwise there would have been data loss).
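
I didn't paste the exact commands; one common way to force a pending sector to be rewritten (and remapped by the drive if needed) is to overwrite just that sector, e.g. with hdparm. The LBA and device below are placeholders, and this destroys whatever was stored in that sector:

sudo hdparm --read-sector 1234567890 /dev/sdX                                  # confirm the sector really is unreadable
sudo hdparm --yes-i-know-what-i-am-doing --write-sector 1234567890 /dev/sdX    # overwrite it with zeroes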

Matija Nalis

Posted 2012-05-28T17:37:00.983

Reputation: 2 107

0

You could try

sudo mdadm --assemble --update=resync /dev/md0 /dev/sdb1 /dev/sdc1

to update the drives and resync them.

orangeocelot

Posted 2012-05-28T17:37:00.983

Reputation: 158

Trying this now... I'll report back when the rebuild supposedly completes. – Deltik – 2012-06-01T15:18:24.750

Didn't work. /dev/sdb1 is still not becoming "active" after it rebuilt as a spare. – Deltik – 2012-06-03T04:04:58.010

0

Not sure if it will work since you already --added the disk, but --re-add appears to be the option you need.

Or perhaps you need to --grow the device to 2 active disks with mdadm --grow -n 2? Not tested, so be careful.
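
For reference, the two suggestions would look something like this (untested against your particular array, so treat it as a sketch):

sudo mdadm /dev/md0 --re-add /dev/sdb1        # only works if the member's metadata still matches the array
sudo mdadm --grow /dev/md0 --raid-devices=2   # make sure the array expects 2 active devices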

Bram

Posted 2012-05-28T17:37:00.983

Reputation: 582

sudo mdadm --grow -n 2 was one of the first things I did, which is why sudo mdadm --detail /dev/md0 shows two slots. Sorry, it doesn't work. – Deltik – 2012-06-04T19:44:33.297

0

I would recommend removing sdc1, zeroing the superblock on sdc1, and then re-adding it.

mdadm /dev/md0 -r /dev/sdc1
mdadm --zero-superblock /dev/sdc1
mdadm /dev/md0 -a /dev/sdc1

Bruno9779

Posted 2012-05-28T17:37:00.983

Reputation: 1 225

I've shifted my data onto each HDD while I zeroed the superblock on the other HDD. The issue I'm having recurs even with complete recreation of the RAID 1 array. – Deltik – 2012-06-05T19:30:09.247