
I get the following error when trying to mount my encrypted media filesystem:

[root@mediaserv ~]# mount /dev/mapper/media1 /media
mount: /media: can't read superblock on /dev/mapper/media1.

This is Fedora 33. I have a RAID 5 array of 8x 8 TB WD Red drives on an Adaptec 7805Q RAID controller, which appears as /dev/sdc. It has a single GPT partition, /dev/sdc1, encrypted with LUKS2 and containing an XFS filesystem.

[root@mediaserv ~]# lsblk /dev/sdc
NAME       MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sdc          8:32   1 50.9T  0 disk
└─sdc1       8:33   1 50.9T  0 part
  └─media1 253:0    0 50.9T  0 crypt
[root@mediaserv ~]#

The RAID ended up in degraded mode. In all likelihood I bumped a cable on the first drive while installing a new fan. After booting back up, it ran in degraded mode for several hours before I caught it. I shut it down, booted to single-user mode from a rescue image, and let the array rebuild. That took about 14 hours.

Booting it back up, I am prompted for the partition's LUKS passphrase, but after entering it the boot just sits there. I let it run for about 8 hours, not sure whether something was being fixed in the background.

I booted from the rescue image again, commented the filesystem out of /etc/crypttab and /etc/fstab, and am now able to log into the system without the /media filesystem mounted.
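For context, the two entries I commented out look roughly like this (reconstructed from memory; the UUID is a placeholder and the exact options may differ):

# /etc/crypttab (entry commented out for now; UUID is a placeholder)
#media1  UUID=<uuid-of-sdc1>  none  luks

# /etc/fstab (entry commented out for now)
#/dev/mapper/media1  /media  xfs  defaults  0 0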

I was able to run cryptsetup luksOpen /dev/sdc1 media1 successfully; the partition seems to decrypt without error.
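For reference, the open step plus a quick sanity check (the status command just reports the device-mapper mapping; it doesn't read the data area):

cryptsetup luksOpen /dev/sdc1 media1    # prompts for the passphrase and returns without error
cryptsetup status media1                # confirms the mapping is active (type, backing device, sector size, size)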

When I run the mount command (above), I get the following in /var/log/messages:

Jan  5 10:23:00 mediaserv kernel: sd 12:0:0:0: [sdc] tag#340 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=0s
Jan  5 10:23:00 mediaserv kernel: sd 12:0:0:0: [sdc] tag#340 Sense Key : Hardware Error [current]
Jan  5 10:23:00 mediaserv kernel: sd 12:0:0:0: [sdc] tag#340 Add. Sense: Internal target failure
Jan  5 10:23:00 mediaserv kernel: sd 12:0:0:0: [sdc] tag#340 CDB: Read(16) 88 00 00 00 00 00 00 00 11 00 00 00 00 01 00 00
Jan  5 10:23:00 mediaserv kernel: blk_update_request: critical target error, dev sdc, sector 34816 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
Jan  5 10:23:00 mediaserv kernel: sd 12:0:0:0: [sdc] tag#341 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=0s
Jan  5 10:23:00 mediaserv kernel: sd 12:0:0:0: [sdc] tag#341 Sense Key : Hardware Error [current]
Jan  5 10:23:00 mediaserv kernel: sd 12:0:0:0: [sdc] tag#341 Add. Sense: Internal target failure
Jan  5 10:23:00 mediaserv kernel: sd 12:0:0:0: [sdc] tag#341 CDB: Read(16) 88 00 00 00 00 00 00 00 11 00 00 00 00 01 00 00
Jan  5 10:23:00 mediaserv kernel: blk_update_request: critical target error, dev sdc, sector 34816 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Jan  5 10:23:00 mediaserv kernel: Buffer I/O error on dev dm-0, logical block 0, async page read
Jan  5 10:23:00 mediaserv kernel: sd 12:0:0:0: [sdc] tag#342 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=0s
Jan  5 10:23:00 mediaserv kernel: sd 12:0:0:0: [sdc] tag#342 Sense Key : Hardware Error [current]
Jan  5 10:23:00 mediaserv kernel: sd 12:0:0:0: [sdc] tag#342 Add. Sense: Internal target failure
Jan  5 10:23:00 mediaserv kernel: sd 12:0:0:0: [sdc] tag#342 CDB: Read(16) 88 00 00 00 00 00 00 00 11 00 00 00 00 01 00 00
Jan  5 10:23:00 mediaserv kernel: blk_update_request: critical target error, dev sdc, sector 34816 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Jan  5 10:23:00 mediaserv kernel: EXT4-fs (dm-0): unable to read superblock
Jan  5 10:23:00 mediaserv kernel: sd 12:0:0:0: [sdc] tag#343 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=0s
Jan  5 10:23:00 mediaserv kernel: sd 12:0:0:0: [sdc] tag#343 Sense Key : Hardware Error [current]
Jan  5 10:23:00 mediaserv kernel: sd 12:0:0:0: [sdc] tag#343 Add. Sense: Internal target failure
Jan  5 10:23:00 mediaserv kernel: sd 12:0:0:0: [sdc] tag#343 CDB: Read(16) 88 00 00 00 00 00 00 00 11 00 00 00 00 01 00 00
Jan  5 10:23:00 mediaserv kernel: blk_update_request: critical target error, dev sdc, sector 34816 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Jan  5 10:23:00 mediaserv kernel: EXT4-fs (dm-0): unable to read superblock
Jan  5 10:23:00 mediaserv kernel: sd 12:0:0:0: [sdc] tag#344 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=0s
Jan  5 10:23:00 mediaserv kernel: sd 12:0:0:0: [sdc] tag#344 Sense Key : Hardware Error [current]
Jan  5 10:23:00 mediaserv kernel: sd 12:0:0:0: [sdc] tag#344 Add. Sense: Internal target failure
Jan  5 10:23:00 mediaserv kernel: sd 12:0:0:0: [sdc] tag#344 CDB: Read(16) 88 00 00 00 00 00 00 00 11 00 00 00 00 01 00 00
Jan  5 10:23:00 mediaserv kernel: blk_update_request: critical target error, dev sdc, sector 34816 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Jan  5 10:23:00 mediaserv kernel: EXT4-fs (dm-0): unable to read superblock
Jan  5 10:23:00 mediaserv kernel: ISOFS: unsupported/invalid hardware sector size 4096
Jan  5 10:23:00 mediaserv kernel: sd 12:0:0:0: [sdc] tag#345 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=0s
Jan  5 10:23:00 mediaserv kernel: sd 12:0:0:0: [sdc] tag#345 Sense Key : Hardware Error [current]
Jan  5 10:23:00 mediaserv kernel: sd 12:0:0:0: [sdc] tag#345 Add. Sense: Internal target failure
Jan  5 10:23:00 mediaserv kernel: sd 12:0:0:0: [sdc] tag#345 CDB: Read(16) 88 00 00 00 00 00 00 00 11 00 00 00 00 01 00 00
Jan  5 10:23:00 mediaserv kernel: blk_update_request: critical target error, dev sdc, sector 34816 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Jan  5 10:23:00 mediaserv kernel: FAT-fs (dm-0): unable to read boot sector

I have attempted to run xfs_repair, but have not tried the -L option yet.

[root@mediaserv ~]# xfs_repair /dev/mapper/media1
Phase 1 - find and verify superblock...
superblock read failed, offset 0, size 524288, ag 0, rval -1

fatal error -- Remote I/O error
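The one thing I understand to be safe is a plain read test of the start of the decrypted device (read-only, discards the data), so I may try something like:

dd if=/dev/mapper/media1 of=/dev/null bs=1M count=64 status=progress   # if this throws the same sector 34816 errors, the problem is below XFS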

I'm not certain where to go next; I'm concerned I may run the wrong command and cause more damage. Any help would certainly be appreciated.

Thanks!

-Mike

EDIT:

After some more investigation, I don't think it's a superblock issue; I think that error appeared because I didn't specify the filesystem type in the mount command. Re-running it with the type given explicitly, I get:

[root@mediaserv ~]# mount -t xfs /dev/mapper/media1 /media
mount: /media: mount(2) system call failed: Remote I/O error.

That drops the following into /var/log/messages:

Jan  5 12:15:43 mediaserv kernel: sd 12:0:0:0: [sdc] tag#838 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=0s
Jan  5 12:15:43 mediaserv kernel: sd 12:0:0:0: [sdc] tag#838 Sense Key : Hardware Error [current]
Jan  5 12:15:43 mediaserv kernel: sd 12:0:0:0: [sdc] tag#838 Add. Sense: Internal target failure
Jan  5 12:15:43 mediaserv kernel: sd 12:0:0:0: [sdc] tag#838 CDB: Read(16) 88 00 00 00 00 00 00 00 11 00 00 00 00 01 00 00
Jan  5 12:15:43 mediaserv kernel: blk_update_request: critical target error, dev sdc, sector 34816 op 0x0:(READ) flags 0x1000 phys_seg 1 prio class 0
Jan  5 12:15:43 mediaserv kernel: XFS (dm-0): SB validate failed with error -121.

I'm not sure how to interpret that. Bad data starting at sector 34816?
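If my arithmetic is right, it at least points at the very start of the filesystem rather than a random spot. Assuming the usual 1 MiB partition start (sector 2048) and the default 16 MiB LUKS2 header (32768 sectors), both of which are assumptions on my part, logical block 0 of /dev/mapper/media1 would land exactly at sector 34816 of /dev/sdc. (Error -121 is EREMOTEIO, the same "Remote I/O error" that mount and xfs_repair reported.)

echo $(( 2048 + 32768 ))   # 34816: 1 MiB partition start + 16 MiB LUKS2 header, in 512-byte sectors
echo $(( 34816 * 512 ))    # 17825792 bytes = 17 MiB into sdc
echo $(( 0x1100 * 4096 ))  # the CDB LBA 0x1100 in 4 KiB sectors works out to the same 17 MiB offset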

EDIT #2:

Regarding the RAID array health: as I mentioned, it did go into degraded mode when it lost the drive. I took it out of service and into single-user mode while the RAID rebuilt. The following is the output of the Adaptec tool after the rebuild (trimmed to be less verbose):

arcconf getconfig 1
----------------------------------------------------------------------
Controller information
----------------------------------------------------------------------
   Controller Status                        : Optimal
   Controller Mode                          : RAID (Expose RAW)
   Controller Model                         : Adaptec ASR7805Q
   Performance Mode                         : Big Block Bypass
   --------------------------------------------------------
   RAID Properties
   --------------------------------------------------------
   Logical devices/Failed/Degraded          : 1/0/0
   Copyback                                 : Disabled
   Automatic Failover                       : Enabled
   Background consistency check             : Disabled
   Background consistency check period      : 0
----------------------------------------------------------------------
Logical device information
----------------------------------------------------------------------
Logical Device number 0
   Logical Device name                      : media
   Block Size of member drives              : 4K Bytes
   RAID level                               : 5
   Status of Logical Device                 : Optimal
   Size                                     : 53387257 MB
   Parity space                             : 7626751 MB
   Stripe-unit size                         : 1024 KB
   Interface Type                           : Serial ATA
   Device Type                              : HDD
   Read-cache setting                       : Enabled
   Read-cache status                        : On
   Write-cache setting                      : On when protected by battery/ZMM
   Write-cache status                       : On
   maxCache read cache setting              : Enabled
   maxCache read cache status               : Off
   maxCache write cache setting             : Disabled
   maxCache write cache status              : Off
   Partitioned                              : Yes
   Protected by Hot-Spare                   : No
   Bootable                                 : Yes
   Failed stripes                           : Yes
   Power settings                           : Disabled
----------------------------------------------------------------------
Physical Device information
----------------------------------------------------------------------
      Device #0
         Device is a Hard drive
         State                              : Online
         Block Size                         : 4K Bytes
      Device #1
         Device is a Hard drive
         State                              : Online
         Block Size                         : 4K Bytes
      Device #2
         Device is a Hard drive
         State                              : Online
         Block Size                         : 4K Bytes
      Device #3
         Device is a Hard drive
         State                              : Online
         Block Size                         : 4K Bytes
      Device #4
         Device is a Hard drive
         State                              : Online
         Block Size                         : 4K Bytes
      Device #5
         Device is a Hard drive
         State                              : Online
         Block Size                         : 4K Bytes
      Device #6
         Device is a Hard drive
         State                              : Online
         Block Size                         : 4K Bytes
      Device #7
         Device is a Hard drive
         State                              : Online
         Block Size                         : 4K Bytes

This is the SMART status of each of the drives in the array:

[root@mediaserv ~]# smartctl -a -d "aacraid,0,0,0" /dev/sdc | grep health
SMART overall-health self-assessment test result: PASSED
[root@mediaserv ~]# smartctl -a -d "aacraid,0,0,1" /dev/sdc | grep health
SMART overall-health self-assessment test result: PASSED
[root@mediaserv ~]# smartctl -a -d "aacraid,0,0,2" /dev/sdc | grep health
SMART overall-health self-assessment test result: PASSED
[root@mediaserv ~]# smartctl -a -d "aacraid,0,0,3" /dev/sdc | grep health
SMART overall-health self-assessment test result: PASSED
[root@mediaserv ~]# smartctl -a -d "aacraid,0,0,4" /dev/sdc | grep health
SMART overall-health self-assessment test result: PASSED
[root@mediaserv ~]# smartctl -a -d "aacraid,0,0,5" /dev/sdc | grep health
SMART overall-health self-assessment test result: PASSED
[root@mediaserv ~]# smartctl -a -d "aacraid,0,0,6" /dev/sdc | grep health
SMART overall-health self-assessment test result: PASSED
[root@mediaserv ~]# smartctl -a -d "aacraid,0,0,7" /dev/sdc | grep health
SMART overall-health self-assessment test result: PASSED

HOWEVER, just a couple of hours ago, while poring through the logs, I found the following:

Jan  4 08:25:25 mediaserv kernel: sd 12:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=9s
Jan  4 08:25:25 mediaserv kernel: sd 12:0:0:0: [sdc] tag#0 Sense Key : Hardware Error [current]
Jan  4 08:25:25 mediaserv kernel: sd 12:0:0:0: [sdc] tag#0 Add. Sense: Internal target failure
Jan  4 08:25:25 mediaserv kernel: sd 12:0:0:0: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 01 60 2f 5c bf 00 00 00 20 00 00
Jan  4 08:25:25 mediaserv kernel: blk_update_request: critical target error, dev sdc, sector 47269471736 op 0x0:(READ) flags 0x80700 phys_seg 5 prio class 0

There are five of those in sequence, still continuing in the logs, and the following appeared at the same time the machine lost the filesystem:

Jan  4 08:26:32 mediaserv kernel: aacraid: Host adapter abort request.#012aacraid: Outstanding commands on (12,0,0,0):
Jan  4 08:26:32 mediaserv kernel: aacraid: Host adapter abort request.#012aacraid: Outstanding commands on (12,0,0,0):
Jan  4 08:26:32 mediaserv kernel: aacraid: Host adapter abort request.#012aacraid: Outstanding commands on (12,0,0,0):
Jan  4 08:26:55 mediaserv kernel: aacraid: Host adapter abort request.#012aacraid: Outstanding commands on (12,0,0,0):
Jan  4 08:26:55 mediaserv kernel: aacraid: Host bus reset request. SCSI hang ?
Jan  4 08:26:55 mediaserv kernel: aacraid 0000:02:00.0: outstanding cmd: midlevel-0
Jan  4 08:26:55 mediaserv kernel: aacraid 0000:02:00.0: outstanding cmd: lowlevel-0
Jan  4 08:26:55 mediaserv kernel: aacraid 0000:02:00.0: outstanding cmd: error handler-0
Jan  4 08:26:55 mediaserv kernel: aacraid 0000:02:00.0: outstanding cmd: firmware-56
Jan  4 08:26:55 mediaserv kernel: aacraid 0000:02:00.0: outstanding cmd: kernel-0
Jan  4 08:26:55 mediaserv kernel: aacraid 0000:02:00.0: Controller reset type is 3
Jan  4 08:26:55 mediaserv kernel: aacraid 0000:02:00.0: Issuing IOP reset
Jan  4 08:27:30 mediaserv kernel: aacraid 0000:02:00.0: IOP reset succeeded
Jan  4 08:27:30 mediaserv kernel: aacraid: Comm Interface type2 enabled
Jan  4 08:27:56 mediaserv kernel: aacraid 0000:02:00.0: Scheduling bus rescan

The interesting thing to note is that the array went into degraded mode, and then 10 hours and 15 minutes later the above happened. So the array issue and the XFS filesystem issue were hours apart. And although the array and drives report healthy now, I am still receiving the "FAILED Result" blocks above.

  • You need to go back to the Adaptec tools to figure out what's wrong with the array. But here's a preview: RAID 5 is not safe for arrays of that size (and so it's quite surprising you created this at all). You should mentally prepare for the need to restore from backup. – Michael Hampton Jan 06 '21 at 03:46
  • Thanks for the reply Michael; however, there's not much in your answer that isn't already obvious. I'll edit my post to include details about the array. In summary, although the array did go into degraded mode, it *shouldn't* have fouled up the file system. This machine is a member of a swarm, with data replication across all of them, so the data itself is not an issue. Before I reformat the file system and put it back into the swarm, I'd like to know why a RAID in degraded mode, with healthy drives, would end up with a corrupt XFS file system. And second, is the file system repairable? – Michael Wilkinson Jan 06 '21 at 05:54
  • Hmm, is your controller going bad? It is quite old, after all. Has it got the latest firmware? – Michael Hampton Jan 06 '21 at 06:44
  • I don't know. I suppose it could be. I don't have another controller handy to test with. I went ahead and wiped out the data and reformatted to straight XFS. Simply using this command: `dd if=/dev/zero of=/media/testfile bs=1G oflag=dsync` I ran it until it reached 25%...no issues. However, dropping LUKS on top of it, I start getting errors in the log when it hits somewhere around 10%. I used the following command to set up the encryption: `cryptsetup luksFormat /dev/sdc1 -q --verify-passphrase --sector-size=4096` – Michael Wilkinson Jan 06 '21 at 15:06
  • I can't really compare it to the other machines, this one is different hardware. The other two are using smaller drives in a software-based array...and no encryption. I think I'm going to go back to no encryption, put it back into production and see if it has any issues over the next few days. In the meantime I may source another controller card just in case. – Michael Wilkinson Jan 06 '21 at 15:07
  • That's a good idea. Lots of used and some occasional new old stock cards on eBay, so no reason not to have a spare or two. – Michael Hampton Jan 06 '21 at 18:26
