
I just checked into my RAID array this morning and what I got is:

$ cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md1 : active raid1 sdc7[0]
      238340224 blocks [2/1] [U_]

md0 : active raid1 sdc6[0]
      244139648 blocks [2/1] [U_]

md127 : active raid1 sdc3[0]
      390628416 blocks [2/1] [U_]

unused devices: <none>
$

This, I believe, means that one disk of my array(s) is dead. Is that true?

How do I troubleshoot this properly going forward? My /etc/mdadm/mdadm.conf looks like this:

$ cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md127 UUID=124cd4a5:2965955f:cd707cc0:bc3f8165
ARRAY /dev/md0 UUID=91e560f1:4e51d8eb:cd707cc0:bc3f8165
ARRAY /dev/md1 UUID=0abe503f:401d8d09:cd707cc0:bc3f8165

How do I find out which physical drive is broken and needs to be replaced?

Thanks

EDIT1

# mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Tue Sep  1 19:15:33 2009
     Raid Level : raid1
     Array Size : 244139648 (232.83 GiB 250.00 GB)
  Used Dev Size : 244139648 (232.83 GiB 250.00 GB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Mon Sep 21 07:11:24 2015
          State : clean, degraded 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           UUID : 91e560f1:4e51d8eb:cd707cc0:bc3f8165
         Events : 0.76017

    Number   Major   Minor   RaidDevice State
       0       8       38        0      active sync   /dev/sdc6
       1       0        0        1      removed
root@regDesktopHome:~# 

Why would it say Failed Devices : 0?

EDIT2
EDIT2
Opening GParted, I can see both /dev/sdb and /dev/sdc, which were my two RAID drives. However, mdadm thinks /dev/sdb has been removed for some reason... that's odd. I tried to mount a partition on `/dev/sdb` and got the following:

$ sudo mount /dev/sdb7 test
[sudo] password for ron: 
mount: unknown filesystem type 'linux_raid_member'

which looks all correct. How do I get my RAID array back in order?

EDIT 3

I ran smartctl -a /dev/sdc and smartctl -a /dev/sdb, and I also ran badblocks /dev/sdc and badblocks /dev/sdb. While sdc seems 100% clean, sdb returned some bad blocks:

# badblocks /dev/sdb
16130668
16130669
16130670
16130671

Would that potentially be the cause for the fault I'm seeing? Any way to repair/ignore these bad blocks or should I replace the drive instead?

EDIT 4

# smartctl --all /dev/sdb
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-3.13.0-62-generic] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Seagate Barracuda 7200.12
Device Model:     ST31000528AS
Serial Number:    6VP0308B
LU WWN Device Id: 5 000c50 013d3ae45
Firmware Version: CC34
User Capacity:    1,000,204,886,016 bytes [1.00 TB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    7200 rpm
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ATA8-ACS T13/1699-D revision 4
SATA Version is:  SATA 2.6, 3.0 Gb/s
Local Time is:    Sat Sep 26 11:35:02 2015 PDT

==> WARNING: A firmware update for this drive may be available,
see the following Seagate web pages:
http://knowledge.seagate.com/articles/en_US/FAQ/207931en
http://knowledge.seagate.com/articles/en_US/FAQ/213891en

SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x82) Offline data collection activity
                                        was completed without error.
                                        Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever 
                                        been run.
Total time to complete Offline 
data collection:                (  600) seconds.
Offline data collection
capabilities:                    (0x7b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine 
recommended polling time:        (   1) minutes.
Extended self-test routine
recommended polling time:        ( 195) minutes.
Conveyance self-test routine
recommended polling time:        (   2) minutes.
SCT capabilities:              (0x103f) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   114   099   006    Pre-fail  Always       -       78420742
  3 Spin_Up_Time            0x0003   095   095   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   099   099   020    Old_age   Always       -       1240
  5 Reallocated_Sector_Ct   0x0033   099   099   036    Pre-fail  Always       -       60
  7 Seek_Error_Rate         0x000f   082   060   030    Pre-fail  Always       -       199357441
  9 Power_On_Hours          0x0032   052   052   000    Old_age   Always       -       42401
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   099   037   020    Old_age   Always       -       1240
183 Runtime_Bad_Block       0x0000   098   098   000    Old_age   Offline      -       2
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   094   094   000    Old_age   Always       -       6
188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -       0
189 High_Fly_Writes         0x003a   050   050   000    Old_age   Always       -       50
190 Airflow_Temperature_Cel 0x0022   062   046   045    Old_age   Always       -       38 (Min/Max 30/38)
194 Temperature_Celsius     0x0022   038   054   000    Old_age   Always       -       38 (0 17 0 0 0)
195 Hardware_ECC_Recovered  0x001a   030   012   000    Old_age   Always       -       78420742
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       1
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       1
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       73332271657814
241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       2822963046
242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       2361465529

SMART Error Log Version: 1
ATA Error Count: 6 (device log contains only the most recent five errors)
        CR = Command Register [HEX]
        FR = Features Register [HEX]
        SC = Sector Count Register [HEX]
        SN = Sector Number Register [HEX]
        CL = Cylinder Low Register [HEX]
        CH = Cylinder High Register [HEX]
        DH = Device/Head Register [HEX]
        DC = Device Command Register [HEX]
        ER = Error register [HEX]
        ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.

Error 6 occurred at disk power-on lifetime: 42372 hours (1765 days + 12 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 51 00 d9 44 ec 01  Error: UNC at LBA = 0x01ec44d9 = 32261337

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 00 08 d8 44 ec 41 00      09:26:28.967  READ FPDMA QUEUED
  27 00 00 00 00 00 e0 00      09:26:28.941  READ NATIVE MAX ADDRESS EXT [OBS-ACS-3]
  ec 00 00 00 00 00 a0 00      09:26:28.940  IDENTIFY DEVICE
  ef 03 46 00 00 00 a0 00      09:26:28.928  SET FEATURES [Set transfer mode]
  27 00 00 00 00 00 e0 00      09:26:28.901  READ NATIVE MAX ADDRESS EXT [OBS-ACS-3]

Error 5 occurred at disk power-on lifetime: 42372 hours (1765 days + 12 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 51 00 d9 44 ec 01  Error: UNC at LBA = 0x01ec44d9 = 32261337

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 00 08 d8 44 ec 41 00      09:26:26.095  READ FPDMA QUEUED
  27 00 00 00 00 00 e0 00      09:26:26.069  READ NATIVE MAX ADDRESS EXT [OBS-ACS-3]
  ec 00 00 00 00 00 a0 00      09:26:26.068  IDENTIFY DEVICE
  ef 03 46 00 00 00 a0 00      09:26:26.055  SET FEATURES [Set transfer mode]
  27 00 00 00 00 00 e0 00      09:26:26.029  READ NATIVE MAX ADDRESS EXT [OBS-ACS-3]

Error 4 occurred at disk power-on lifetime: 42372 hours (1765 days + 12 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 51 00 d9 44 ec 01  Error: UNC at LBA = 0x01ec44d9 = 32261337

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 00 08 d8 44 ec 41 00      09:26:23.222  READ FPDMA QUEUED
  27 00 00 00 00 00 e0 00      09:26:23.195  READ NATIVE MAX ADDRESS EXT [OBS-ACS-3]
  ec 00 00 00 00 00 a0 00      09:26:23.194  IDENTIFY DEVICE
  ef 03 46 00 00 00 a0 00      09:26:23.182  SET FEATURES [Set transfer mode]
  27 00 00 00 00 00 e0 00      09:26:23.137  READ NATIVE MAX ADDRESS EXT [OBS-ACS-3]

Error 3 occurred at disk power-on lifetime: 42372 hours (1765 days + 12 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 51 00 d9 44 ec 01  Error: UNC at LBA = 0x01ec44d9 = 32261337

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 00 08 d8 44 ec 41 00      09:26:20.351  READ FPDMA QUEUED
  60 00 80 e8 44 ec 41 00      09:26:20.350  READ FPDMA QUEUED
  27 00 00 00 00 00 e0 00      09:26:20.324  READ NATIVE MAX ADDRESS EXT [OBS-ACS-3]
  ec 00 00 00 00 00 a0 00      09:26:20.323  IDENTIFY DEVICE
  ef 03 46 00 00 00 a0 00      09:26:20.311  SET FEATURES [Set transfer mode]

Error 2 occurred at disk power-on lifetime: 42372 hours (1765 days + 12 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  40 51 00 d9 44 ec 01  Error: UNC at LBA = 0x01ec44d9 = 32261337

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  60 00 80 e8 44 ec 41 00      09:26:17.478  READ FPDMA QUEUED
  60 00 40 a8 44 ec 41 00      09:26:17.478  READ FPDMA QUEUED
  60 00 20 88 44 ec 41 00      09:26:17.476  READ FPDMA QUEUED
  60 00 08 80 44 ec 41 00      09:26:17.453  READ FPDMA QUEUED
  27 00 00 00 00 00 e0 00      09:26:17.427  READ NATIVE MAX ADDRESS EXT [OBS-ACS-3]

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]


SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

# 

EDIT 5

I realized that after unplugging /dev/sdb, the previous /dev/sdc is now /dev/sdb. I confirmed with smartctl -a /dev/sdb that the serial number changed after booting with the bad disk unplugged. I'm unlucky and the drive is out of warranty, so I will get myself a new replacement drive.

stdcerr
  • FYI: You should see [UU] for a healthy mirror, not [U_]. As you thought, it means one of your mirrors is missing. @Halfgaar's suggestion is best. – Tim S. Sep 21 '15 at 15:30
  • Re edit 6, have you done the `mdadm --manage ... --add ...` that Halfgaar suggested? – MadHatter Nov 01 '15 at 16:23
  • One of the key things about Stack Exchange sites is that they are not forums and there are no threads. You asked a question, and it has been answered. If you have more (related) questions, you should ask them as new questions and, if applicable, reference the original. Before you do that, though, you might want to search the site. – user9517 Nov 01 '15 at 16:49

1 Answer


Since you don't see the broken drive (it would be marked with (F)) in the output of cat /proc/mdstat, you have rebooted the server since the array became degraded.

You can obtain info with mdadm --detail /dev/md0. That will probably tell you which other drive should be in it.
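
The [U_] status can also be checked mechanically. Here is a sketch that scans a sample of the questioner's /proc/mdstat from a here-doc; on a live system you would read the real file instead:

```shell
# Flag any md array whose member-status field contains "_" (a missing
# mirror half). Sample data copied from the question; on a real system,
# replace the here-doc with:  mdstat=$(cat /proc/mdstat)
mdstat=$(cat <<'EOF'
md1 : active raid1 sdc7[0]
      238340224 blocks [2/1] [U_]
md0 : active raid1 sdc6[0]
      244139648 blocks [2/1] [U_]
EOF
)
echo "$mdstat" | awk '
  /^md/          { dev = $1 }                  # remember the array name
  /\[U*_[U_]*\]/ { print dev " is degraded" }  # status field has a "_"
'
```

A healthy two-way mirror shows [UU] and prints nothing here.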

To respond to your edit:

I would analyze /dev/sdb first. Use smartctl -a to check (especially) the reallocated sector count and the error log. Do a self test with smartctl -t long /dev/sdb. Use badblocks, etc.
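
The analysis steps above, sketched as commands. This is printed as a dry run (each command is echoed, not executed) so nothing touches a disk by accident; the device name is taken from the question:

```shell
# Dry-run sketch of the drive-health checks described above.
run() { echo "+ $*"; }    # drop the echo to execute for real (as root)

run smartctl -a /dev/sdb           # attributes (esp. Reallocated_Sector_Ct) + error log
run smartctl -t long /dev/sdb      # start an extended self-test (runs in the background)
run smartctl -l selftest /dev/sdb  # read the result once the test finishes
run badblocks -sv /dev/sdb         # read-only surface scan with progress output
```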

Then:

  • If you replace /dev/sdb, copy the partition table from /dev/sdc. If they're not GPT, you can use sfdisk -d /dev/sdc | sfdisk /dev/sdb. Or if they are GPT, you can use gdisk to save the partition table to file, and then load it. It's hidden under advanced functions.
  • Something general to consider: if your (new) drive has 4k sectors, make sure the partitions are 4k aligned.
  • If you're going to re-add your existing /dev/sdb, you may want to run mdadm --zero-superblock on all existing partitions.
  • Then you can mdadm --manage /dev/md0 --add /dev/sdb6, and do the same for md1/sdb7 (and md127/sdb3).

Needless to say, some of these commands will wipe out your data if you mix up your drives. So be sure which drive is sdc and which is sdb...
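
Putting the list above together, the whole replacement sequence might look like the following. It is a dry run (commands are echoed, not executed), and the device names are assumptions taken from this question; verify them against serial numbers before running anything for real:

```shell
# Dry-run sketch: replace sdb, clone the layout from the healthy sdc,
# then re-add the members. Echoes only; drop the echo to execute.
run() { echo "+ $*"; }

run 'sfdisk -d /dev/sdc | sfdisk /dev/sdb'    # copy the (MBR) partition table
run mdadm --zero-superblock /dev/sdb6         # only when re-adding an old member
run mdadm --manage /dev/md0 --add /dev/sdb6
run mdadm --manage /dev/md1 --add /dev/sdb7
run mdadm --manage /dev/md127 --add /dev/sdb3
run cat /proc/mdstat                          # watch the rebuild progress
```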

Edit: about bad blocks: if any software-level tool sees bad blocks, the drive is busted. Normally, disks hide them by reallocating them transparently upon write. Google for 'hard drive sector reallocation'. Your smartctl -a output should show reallocated sectors for sdb. So yes, your sdb has been kicked out of the array and you need to replace it.

Edit: about the smartctl -a output. There are two things in there that are of primary importance:

  • It shows 60 reallocated sectors. Even though the normalized value is still 99 and would only officially be 'bad' if it reached 36 (it counts down), you shouldn't trust a disk that starts reallocating sectors. The raw value is what matters, especially if it starts changing. You can even configure smartd to monitor it for you.
  • The error log shows entries at age 42372 hours. You can tell that is recent by comparing it with attribute 9, Power_On_Hours (42401 in your case). There are harmless things that can cause SMART error-log entries, such as invalid ATA commands, but in this case, because you have a degraded array, they are likely related.
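
As a concrete example of the smartd suggestion: a minimal /etc/smartd.conf line might look like this (the device name and mail address are placeholders; adjust to your system):

```
# /etc/smartd.conf -- sketch; adjust device and address.
# -a: monitor the standard attribute set; -R 5: also report raw-value
# changes of attribute 5 (Reallocated_Sector_Ct); -m: where to mail alerts.
/dev/sdb -a -R 5 -m root
```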

As for determining which disk it is in your system: dmesg | grep -i sdb, for example, will help. You probably have three disks in your system, and sdb is the one on your second SATA controller, which may be labelled 1 or 2 depending on whether the numbering is zero-based or one-based.
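
A reliable cross-check before pulling hardware is the drive serial number, since kernel names can move around (as Edit 5 showed). A sketch, parsing the Serial Number line from the smartctl output quoted above; on a live system you would pipe in smartctl -i instead of the sample:

```shell
# Extract the serial from smartctl's INFORMATION SECTION. Sample lines
# copied from the question; on a real system, replace the here-doc with:
#   smartctl -i /dev/sdb
info=$(cat <<'EOF'
Device Model:     ST31000528AS
Serial Number:    6VP0308B
EOF
)
echo "$info" | awk -F': *' '/^Serial Number/ { print $2 }'
```

ls -l /dev/disk/by-id gives the same serial-to-name mapping in one listing. Write the serial on the drive tray before you power down, so you pull the right disk.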

Because you likely boot from sda, you can just replace sdb and perform the operations I outlined above. If your boot drive were the broken one, you'd have to hope that you have:

  • Installed grub on the other disk(s) as well.
  • A server that can actually boot from another disk.

The other day, a Dell server didn't want to boot from sdb while there was a blank sda in it. That took some convincing and improvising.

Sometimes you need to translate names like ata1.01 into real device names; failing disks will produce kernel errors saying 'ATA exception on ata1.01' or words to that effect. Read this answer for that. (I configured our central logging system to warn me about those kernel errors, because they are a reliable indication of pending disk failure.)
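
To sketch the ataN-to-sdX translation: each block device's sysfs path contains the ATA port it hangs off. This is a starting point, not a guaranteed mapping, since the path layout varies by kernel version:

```shell
# Print each sdX device together with its resolved sysfs device path,
# which typically contains the ataN port name (e.g. .../ata1/host0/...).
shopt -s nullglob                # make the loop a no-op if no sd* devices exist
for dev in /sys/block/sd*; do
    echo "$(basename "$dev") -> $(readlink -f "$dev/device")"
done
```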

Halfgaar