2

I am recovering a Dell PowerEdge R510 server running Scientific Linux 5.5 after an unexpected power outage. The server was set up by our previous system administrator (I'm a grad student). Upon reboot, I see the message:

fsck.ext3: Device or resource busy when trying to open /dev/sdb1 
File system mounted or opened exclusively by another program?

/dev/sdb1 holds the /home directory and is a MegaRAID-controlled RAID5 array of 9x600 GB SAS disks.

# megacli -AdpAllInfo -aALL                                 
Adapter #0

==============================================================================
                    Versions
                ================
Product Name    : PERC H700 Integrated
Serial No       : 06C006X
FW Package Build: 12.3.0-0032

                    Mfg. Data
                ================
Mfg. Date       : 06/19/10
Rework Date     : 06/19/10
Revision No     : A00
Battery FRU     : N/A

                Image Versions in Flash:
                ================
BIOS Version       : 3.09.00
FW Version         : 2.30.03-0872
Preboot CLI Version: 04.02-004:#%00008
Ctrl-R Version     : 2.02-0009
NVDATA Version     : 2.03.0053
Boot Block Version : 2.02.00.00-0000
BOOT Version       : 01.250.04.219

                Pending Images in Flash
                ================
None

                PCI Info
                ================
Vendor Id       : 1000
Device Id       : 0079
SubVendorId     : 1028
SubDeviceId     : 1f17

Host Interface  : PCIE

Number of Frontend Port: 0 
Device Interface  : PCIE

Number of Backend Port: 8 
Port  :  Address
0        500065b36789abff 
1        0000000000000000 
2        0000000000000000 
3        0000000000000000 
4        0000000000000000 
5        0000000000000000 
6        0000000000000000 
7        0000000000000000 

                HW Configuration
                ================
SAS Address      : 5842b2b01789d900
BBU              : Present
Alarm            : Absent
NVRAM            : Present
Serial Debugger  : Present
Memory           : Present
Flash            : Present
Memory Size      : 1024MB
TPM              : Absent
On board Expander: Absent
Upgrade Key      : Absent

                Settings
                ================
Current Time                     : 3:20:56 1/13, 2013
Predictive Fail Poll Interval    : 300sec
Interrupt Throttle Active Count  : 16
Interrupt Throttle Completion    : 50us
Rebuild Rate                     : 30%
PR Rate                          : 30%
BGI Rate                         : 30%
Check Consistency Rate           : 30%
Reconstruction Rate              : 30%
Cache Flush Interval             : 4s
Max Drives to Spinup at One Time : 4
Delay Among Spinup Groups        : 12s
Physical Drive Coercion Mode     : 128MB
Cluster Mode                     : Disabled
Alarm                            : Disabled
Auto Rebuild                     : Enabled
Battery Warning                  : Enabled
Ecc Bucket Size                  : 15
Ecc Bucket Leak Rate             : 1440 Minutes
Restore HotSpare on Insertion    : Disabled
Expose Enclosure Devices         : Disabled
Maintain PD Fail History         : Disabled
Host Request Reordering          : Enabled
Auto Detect BackPlane Enabled    : SGPIO/i2c SEP
Load Balance Mode                : Auto
Use FDE Only                     : Yes
Security Key Assigned            : No
Security Key Failed              : No
Security Key Not Backedup        : No
Any Offline VD Cache Preserved   : No
Allow Boot with Preserved Cache  : No
Disable Online Controller Reset  : No
PFK in NVRAM                     : No
Use disk activity for locate     : No

                Capabilities
                ================
RAID Level Supported             : RAID0, RAID1, RAID5, RAID6, RAID00, RAID10, RAID50, RAID60, PRL 11, PRL 11 with spanning, PRL11-RLQ0 DDF layout with no span, PRL11-RLQ0 DDF layout with span
Supported Drives                 : SAS, SATA

Allowed Mixing:

Mix in Enclosure Allowed

                Status
                ================
ECC Bucket Count                 : 0

                Limitations
                ================
Max Arms Per VD          : 32 
Max Spans Per VD         : 8 
Max Arrays               : 128 
Max Number of VDs        : 64 
Max Parallel Commands    : 1008 
Max SGE Count            : 60 
Max Data Transfer Size   : 8192 sectors 
Max Strips PerIO         : 42 
Min Strip Size          : 8 KB
Max Strip Size          : 1.0 MB
Max Configurable CacheCade Size: 0 GB
Current Size of CacheCade      : 0 GB
Current Size of FW Cache       : 0 MB

                Device Present
                ================
Virtual Drives    : 2 
  Degraded        : 0 
  Offline         : 0 
Physical Devices  : 14 
  Disks           : 12 
  Critical Disks  : 0 
  Failed Disks    : 0 

                Supported Adapter Operations
                ================
Rebuild Rate                    : Yes
CC Rate                         : Yes
BGI Rate                        : Yes
Reconstruct Rate                : Yes
Patrol Read Rate                : Yes
Alarm Control                   : Yes
Cluster Support                 : No
BBU                             : Yes
Spanning                        : Yes
Dedicated Hot Spare             : Yes
Revertible Hot Spares           : Yes
Foreign Config Import           : Yes
Self Diagnostic                 : Yes
Allow Mixed Redundancy on Array : No
Global Hot Spares               : Yes
Deny SCSI Passthrough           : No
Deny SMP Passthrough            : No
Deny STP Passthrough            : No
Support Security                : Yes
Snapshot Enabled                : No
Support the OCE without adding drives : Yes
Support PFK                     : No

                Supported VD Operations
                ================
Read Policy          : Yes
Write Policy         : Yes
IO Policy            : Yes
Access Policy        : Yes
Disk Cache Policy    : Yes
Reconstruction       : Yes
Deny Locate          : No
Deny CC              : No
Allow Ctrl Encryption: No
Enable LDBBM         : Yes

                Supported PD Operations
                ================
Force Online                            : Yes
Force Offline                           : Yes
Force Rebuild                           : Yes
Deny Force Failed                       : No
Deny Force Good/Bad                     : No
Deny Missing Replace                    : No
Deny Clear                              : No
Deny Locate                             : No
Disable Copyback                        : No
Enable JBOD                             : No
Enable Copyback on SMART                : No
Enable Copyback to SSD on SMART Error   : No
Enable SSD Patrol Read                  : No
PR Correct Unconfigured Areas           : Yes
Enable Spin Down of UnConfigured Drives : No
Disable Spin Down of hot spares         : Yes
Spin Down time                          : 30 
                Error Counters
                ================
Memory Correctable Errors   : 0 
Memory Uncorrectable Errors : 0 

                Cluster Information
                ================
Cluster Permitted     : No
Cluster Active        : No

                Default Settings
                ================
Phy Polarity                     : 0 
Phy PolaritySplit                : 0 
Background Rate                  : 30 
Strip Size                      : 64kB
Flush Time                       : 4 seconds
Write Policy                     : WB
Read Policy                      : Adaptive
Cache When BBU Bad               : Disabled
Cached IO                        : No
SMART Mode                       : Mode 6
Alarm Disable                    : Yes
Coercion Mode                    : 128MB
ZCR Config                       : Unknown
Dirty LED Shows Drive Activity   : No
BIOS Continue on Error           : No
Spin Down Mode                   : None
Allowed Device Type              : SAS/SATA Mix
Allow Mix in Enclosure           : Yes
Allow HDD SAS/SATA Mix in VD     : No
Allow SSD SAS/SATA Mix in VD     : No
Allow HDD/SSD Mix in VD          : No
Allow SATA in Cluster            : No
Max Chained Enclosures           : 1 
Disable Ctrl-R                   : No
Enable Web BIOS                  : No
Direct PD Mapping                : Yes
BIOS Enumerate VDs               : Yes
Restore Hot Spare on Insertion   : No
Expose Enclosure Devices         : No
Maintain PD Fail History         : No
Disable Puncturing               : No
Zero Based Enclosure Enumeration : Yes
PreBoot CLI Enabled              : No
LED Show Drive Activity          : Yes
Cluster Disable                  : Yes
SAS Disable                      : No
Auto Detect BackPlane Enable     : SGPIO/i2c SEP
Use FDE Only                     : Yes
Enable Led Header                : No
Delay during POST                : 0 
EnableCrashDump                  : No
Disable Online Controller Reset  : No
EnableLDBBM                      : Yes
Un-Certified Hard Disk Drives    : Allow
Treat Single span R1E as R10     : Yes
Max LD per array                 : 16
Power Saving option              : Disable all power saving options
Default spin down time in minutes: 30 
Enable JBOD                      : No

Exit Code: 0x00

dmesg just before the error displays:

device-mapper: multipath: version 1.0.5 loaded
device-mapper: multipath round-robin: version 1.0.0
device-mapper: table 253:0: multipath: error getting device
device-mapper: ioctl: error: adding target to table
device-mapper: table 253:0: multipath: error getting device
device-mapper: ioctl: error: adding target to table

If I comment out the corresponding entry in the /etc/fstab file and reboot

LABEL=/home         /home           ext3 defaults   1  2

the system boots as normal (but without the disk). However, I still cannot mount the disk. A little further investigation yields the following:

# mount /dev/sdb1 /home
mount: /dev/sdb1 already mounted or /home busy
# lsof /dev/sdb
COMMAND     PID  USER   FD    TYPE  DEVICE  SIZE  MODE  NAME
multipath   3864 root    5r    BLK    8,16        2582 /dev/sdb 
# fuser /dev/sdb
          3864
# ps -ef | grep 3864
3864     1   0   19:22  ?    00:00:00 /sbin/multipathd

Apparently multipath is preventing me from mounting the disk manually. Would it be safe or proper for me to kill the multipath daemon? The paths and config for multipathd are as follows:

multipathd> show paths
hcil    dev dev_t pri dm_st   chk_st  next_check    
0:2:0:0 sda 8:0   1   [undef] [ready] [orphan]
0:2:1:0 sdb 8:16  1   [active][ready] XXXXXX.... 13/20    
multipathd> show config
defaults {
    verbosity 2
    user_friendly_names yes 
}
blacklist {
    devnode ^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*
    devnode ^hd[a-z]
    device {
        vendor DGC 
        product LUNZ
    }   
    device {
        vendor EMC 
        product LUNZ
    }   
    device {
        vendor IBM 
        product S/390.*
    }   
    device {
        vendor IBM 
        product S/390.*
    }   
    device {
        vendor STK 
        product Universal Xport
    }   
}
blacklist_exceptions {
}
devices {
    device {
        vendor NETAPP
        product LUN 
        path_grouping_policy multibus
        path_checker directio
        features 1 queue_if_no_path
        prio_callout /sbin/mpath_prio_ontap /dev/%n
        failback immediate
        flush_on_last_del yes 
    }   
    device {
      vendor APPLE*
        product Xserve RAID 
        path_grouping_policy multibus
    }
    device {
        vendor 3PARdata
        product VV
        path_grouping_policy multibus
    }
    device {
        vendor DEC
        product HSG80
        path_grouping_policy group_by_prio
        path_checker hp_sw
        features 1 queue_if_no_path
        hardware_handler 1 hp-sw
        prio_callout /sbin/mpath_prio_hp_sw /dev/%n
    }
    device {
        vendor COMPAQ
        product (MSA|HSV)1.0.*
        path_grouping_policy group_by_prio
        path_checker hp_sw
        features 1 queue_if_no_path
        hardware_handler 1 hp-sw
        prio_callout /sbin/mpath_prio_hp_sw /dev/%n
        no_path_retry 12
        rr_min_io 100
    }
    device {
        vendor (COMPAQ|HP)
        product HSV1[01]1|HSV2[01]0|HSV300|HSV4[05]|HSV4[05]0
        path_grouping_policy group_by_prio
        path_checker tur
        prio_callout /sbin/mpath_prio_alua /dev/%n
        failback immediate
        no_path_retry 12
        rr_min_io 100
    }
    device {
        vendor (COMPAQ|HP)
        product MSA VOLUME
        path_grouping_policy group_by_prio
        path_checker tur
        prio_callout /sbin/mpath_prio_alua /dev/%n
        failback immediate
        no_path_retry 12
        rr_min_io 100
    }
    device {
        vendor HP
        product MSA2[02]12fc|MSA2012i
        path_grouping_policy multibus
        path_checker tur
        prio_callout /bin/true
        failback immediate
        no_path_retry 18
        rr_min_io 100
    }
    device {
        vendor HP
        product MSA2012sa|MSA23(12|24)(fc|i|sa)|MSA2000s VOLUME
        path_grouping_policy group_by_prio
        path_checker tur
        prio_callout /sbin/mpath_prio_alua /dev/%n
        failback immediate
        no_path_retry 18
        rr_min_io 100
    }
    device {
        vendor HP
        product HSVX700
        path_grouping_policy group_by_prio
        path_checker tur
        hardware_handler 1 alua
        prio_callout /sbin/mpath_prio_alua /dev/%n
        failback immediate
        no_path_retry 12
        rr_min_io 100
    }
    device {
        vendor HP
        product A6189A
        path_grouping_policy multibus
    }
    device {
        vendor DDN
        product SAN DataDirector
        path_grouping_policy multibus
    }
    device {
        vendor EMC
        product SYMMETRIX
        path_grouping_policy multibus
        getuid_callout /sbin/scsi_id -g -u -ppre-spc3-83 -s /block/%n
    }
    device {
        vendor DGC
        product .*
        product_blacklist LUNZ
        path_grouping_policy group_by_prio
        path_checker emc_clariion
        features 1 queue_if_no_path
        hardware_handler 1 emc
        prio_callout /sbin/mpath_prio_emc /dev/%n
        failback immediate
        no_path_retry 60
    }
    device {
        vendor FSC
        product CentricStor
        path_grouping_policy group_by_serial
    }
    device {
        vendor (HITACHI|HP)
        product OPEN-.*
        path_grouping_policy multibus
        path_checker tur
        failback immediate
        no_path_retry 12
    }
    device {
        vendor HITACHI
        product DF.*
        path_grouping_policy group_by_prio
        prio_callout /sbin/mpath_prio_hds_modular %d
        failback immediate
    }
    device {
        vendor EMC
        product Invista
        product_blacklist LUNZ
        path_grouping_policy multibus
        path_checker tur
        no_path_retry 5
    }
    device {
        vendor IBM
        product ProFibre 4000R
        path_grouping_policy multibus
    }
    device {
        vendor IBM
        product 1722-600
        path_grouping_policy group_by_prio
        path_checker rdac
        features 1 queue_if_no_path
        hardware_handler 1 rdac
        prio_callout /sbin/mpath_prio_rdac /dev/%n
        failback immediate
        no_path_retry 300
    }
    device {
        vendor IBM
        product 1724
        path_grouping_policy group_by_prio
        path_checker rdac
        features 1 queue_if_no_path
        hardware_handler 1 rdac
        prio_callout /sbin/mpath_prio_rdac /dev/%n
        failback immediate
        no_path_retry 300
    }
    device {
        vendor IBM
        product 1726
        path_grouping_policy group_by_prio
        path_checker rdac
        features 1 queue_if_no_path
        hardware_handler 1 rdac
        prio_callout /sbin/mpath_prio_rdac /dev/%n
        failback immediate
        no_path_retry 300
    }
    device {
        vendor IBM
        product 1742
        path_grouping_policy group_by_prio
        path_checker rdac
        hardware_handler 1 rdac
        prio_callout /sbin/mpath_prio_rdac /dev/%n
        failback immediate
    }
    device {
        vendor IBM
        product 1814
        path_grouping_policy group_by_prio
        path_checker rdac
        hardware_handler 1 rdac
        prio_callout /sbin/mpath_prio_rdac /dev/%n
        failback immediate
        no_path_retry queue
    }
    device {
        vendor IBM
        product 1745|1746
        path_grouping_policy group_by_prio
        path_checker rdac
        features 2 pg_init_retries 50
        hardware_handler 1 rdac
        prio_callout /sbin/mpath_prio_rdac /dev/%n
        failback immediate
        no_path_retry 15
    }
    device {
        vendor IBM
        product 1815
        path_grouping_policy group_by_prio
        path_checker rdac
        hardware_handler 1 rdac
        prio_callout /sbin/mpath_prio_rdac /dev/%n
        failback immediate
        no_path_retry queue
    }
    device {
        vendor IBM
        product 1818
        path_grouping_policy group_by_prio
        path_checker rdac
        hardware_handler 1 rdac
        prio_callout /sbin/mpath_prio_rdac /dev/%n
        failback immediate
        no_path_retry queue
    }
    device {
        vendor IBM
        product 3526
        path_grouping_policy group_by_prio
        path_checker rdac
        hardware_handler 1 rdac
        prio_callout /sbin/mpath_prio_rdac /dev/%n
        failback immediate
    }
    device {
        vendor IBM
        product 3542
        path_grouping_policy group_by_serial
        path_checker tur
    }
    device {
        vendor IBM
        product 2105(800|F20)
        path_grouping_policy group_by_serial
        path_checker tur
        features 1 queue_if_no_path
    }
    device {
        vendor IBM
        product 1750500
        path_grouping_policy group_by_prio
        path_checker tur
        features 1 queue_if_no_path
        prio_callout /sbin/mpath_prio_alua /dev/%n
        failback immediate
    }
    device {
        vendor IBM
        product 2107900
        path_grouping_policy multibus
        path_checker tur
        features 1 queue_if_no_path
    }
    device {
        vendor IBM
        product 2145
        path_grouping_policy group_by_prio
        path_checker tur
        features 1 queue_if_no_path
        prio_callout /sbin/mpath_prio_alua /dev/%n
        failback immediate
    }
    device {
        vendor IBM
        product S/390 DASD ECKD
        product_blacklist S/390.*
        path_grouping_policy multibus
        getuid_callout /sbin/dasd_id /dev/%n
        path_checker directio
        features 1 queue_if_no_path
    }
    device {
        vendor IBM
        product S/390 DASD FBA
        product_blacklist S/390.*
        path_grouping_policy multibus
        getuid_callout /sbin/dasd_id /dev/%n
        path_checker directio
    }
    device {
        vendor NETAPP
        product LUN.*
        path_grouping_policy group_by_prio
        path_checker directio
        features 1 queue_if_no_path
        prio_callout /sbin/mpath_prio_ontap /dev/%n
        failback immediate
        rr_min_io 128
    }
    device {
        vendor IBM
        product Nseries.*
        path_grouping_policy group_by_prio
        features 1 queue_if_no_path
        prio_callout /sbin/mpath_prio_ontap /dev/%n
        failback immediate
        rr_min_io 128
    }
    device {
        vendor Pillar
        product Axiom [35]00
        path_grouping_policy group_by_prio
        path_checker tur
        prio_callout /sbin/mpath_prio_alua %d
    }
    device {
        vendor AIX
        product VDASD
        path_grouping_policy multibus
        path_checker directio
        failback immediate
        no_path_retry 60
    }
    device {
        vendor SGI
        product TP9[13]00
        path_grouping_policy multibus
    }
    device {
        vendor SGI
        product TP9[45]00
        path_grouping_policy group_by_prio
        path_checker rdac
        hardware_handler 1 rdac
        prio_callout /sbin/mpath_prio_rdac /dev/%n
        failback immediate
    }
    device {
        vendor SGI
        product IS.*
        path_grouping_policy group_by_prio
        path_checker rdac
        hardware_handler 1 rdac
        prio_callout /sbin/mpath_prio_rdac /dev/%n
        failback immediate
        no_path_retry queue
    }
    device {
        vendor STK
        product OPENstorage D280
        path_grouping_policy group_by_prio
        path_checker tur
        hardware_handler 1 rdac
        prio_callout /sbin/mpath_prio_rdac /dev/%n
        failback immediate
    }
    device {
        vendor STK
        product FLEXLINE 380
        product_blacklist Universal Xport
        path_grouping_policy group_by_prio
        path_checker rdac
        hardware_handler 1 rdac
        prio_callout /sbin/mpath_prio_rdac /dev/%n
        failback immediate
        no_path_retry queue
    }
    device {
        vendor SUN
        product (StorEdge 3510|T4)
        path_grouping_policy multibus
    }
    device {
        vendor PIVOT3
        product RAIGE VOLUME
        path_grouping_policy multibus
        getuid_callout /sbin/scsi_id -p 0x80 -g -u -d /dev/%n
        path_checker tur
        features 1 queue_if_no_path
        rr_min_io 100
    }
    device {
        vendor SUN
        product CSM200_R
        path_grouping_policy group_by_prio
        path_checker rdac
        hardware_handler 1 rdac
        prio_callout /sbin/mpath_prio_rdac /dev/%n
        failback immediate
        no_path_retry queue
    }
    device {
        vendor SUN
        product LCSM100_F
        path_grouping_policy group_by_prio
        path_checker rdac
        hardware_handler 1 rdac
        prio_callout /sbin/mpath_prio_rdac /dev/%n
        failback immediate
        no_path_retry queue
    }
    device {
        vendor (LSI|ENGENIO)
        product INF.*
        path_grouping_policy group_by_prio
        path_checker rdac
        features 2 pg_init_retries 50
        hardware_handler 1 rdac
        prio_callout /sbin/mpath_prio_rdac /dev/%n
        failback immediate
        no_path_retry 15
    }
    device {
        vendor DELL
        product MD3000|MD3000i
        path_grouping_policy group_by_prio
        path_checker rdac
        features 2 pg_init_retries 50
        hardware_handler 1 rdac
        prio_callout /sbin/mpath_prio_rdac /dev/%n
        failback immediate
        no_path_retry 15
    }
    device {
        vendor DELL
        product MD32xx|MD32xxi
        path_grouping_policy group_by_prio
        path_checker rdac
        features 2 pg_init_retries 50
        hardware_handler 1 rdac
        prio_callout /sbin/mpath_prio_rdac /dev/%n
        failback immediate
        no_path_retry 15
    }
    device {
        vendor COMPELNT
        product Compellent Vol
        path_grouping_policy multibus
        path_checker tur
        failback immediate
        no_path_retry queue
    }
    device {
        vendor GNBD
        product GNBD
        path_grouping_policy multibus
        getuid_callout /sbin/gnbd_import -q -U /block/%n
        path_checker directio
    }
}
multipaths {
}

The contents of my /etc/multipath.conf file:

# Blacklist all local devices
devnode_blacklist {
        devnode "sd[a-b]$"
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
        devnode "^cciss!c[0-9]d[0-9]*"
}

## Use user friendly names, instead of using WWIDs as names.
defaults {
        user_friendly_names yes 
}

devices {
        device {
                vendor                  "NETAPP"
                product                 "LUN"
                path_grouping_policy    multibus
                getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
                prio_callout            "/sbin/mpath_prio_ontap /dev/%n"
                features                "1 queue_if_no_path"
                path_checker            directio
                failback                immediate
                flush_on_last_del       yes 
        }   
}

The drive itself appears to be in good condition. If I boot from a USB-loaded LiveCD I can mount /dev/sdb1 without any problem, and the files all appear to be present. Running fsck on the partition, the disk looks fine. The output (from the LiveCD, where the partition shows up as /dev/sdc1) is seen below:

# fsck.ext3 -fyv /dev/sdc1
e2fsck 1.39 (29-May-2006)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information

 1387340 inodes used (0.12%)
  219427 non-contiguous inodes (15.8%)
         # of inodes with ind/dind/tind blocks: 343585/63330/32
869857188 blocks used (74.28%)
       0 bad blocks
      96 large files

 1310760 regular files
   71629 directories
       0 character device files
       0 block device files
       0 fifos
       1 link
    4942 symbolic links (4497 fast symbolic links)
       0 sockets
--------
 1387332 files

For completeness, output of fdisk -l:

WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.


WARNING: The size of this disk is 4.8 TB (4796404727808 bytes).
DOS partition table format can not be used on drives for volumes
larger than 2.2 TB (2199023255040 bytes). Use parted(1) and GUID
partition table format (GPT).


WARNING: GPT (GUID Partition Table) detected on '/dev/dm-0'! The util fdisk doesn't support GPT. Use GNU Parted.


WARNING: The size of this disk is 4.8 TB (4796404727808 bytes).
DOS partition table format can not be used on drives for volumes
larger than 2.2 TB (2199023255040 bytes). Use parted(1) and GUID
partition table format (GPT).

Disk /dev/dm-1 doesn't contain a valid partition table

Disk /dev/sda: 599.5 GB, 599550590976 bytes
255 heads, 63 sectors/track, 72891 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        5099    40957686   83  Linux
/dev/sda2            5100       69194   514843087+  83  Linux
/dev/sda3           69195       71744    20482875   82  Linux swap / Solaris
/dev/sda4           71745       72891     9213277+   5  Extended
/dev/sda5           71745       72381     5116671   83  Linux
/dev/sda6           72382       72891     4096543+  83  Linux

Disk /dev/sdb: 4796.4 GB, 4796404727808 bytes
255 heads, 63 sectors/track, 583129 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1      267350  2147483647+  ee  EFI GPT

Disk /dev/dm-0: 4796.4 GB, 4796404727808 bytes
255 heads, 63 sectors/track, 583129 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

     Device Boot      Start         End      Blocks   Id  System
/dev/dm-0p1               1      267350  2147483647+  ee  EFI GPT

Disk /dev/dm-1: 4796.4 GB, 4796404693504 bytes
255 heads, 63 sectors/track, 583129 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

I suspect that changing some settings in a configuration file, either for multipath or for some other utility, will remedy my problem. However, I'm not sure how to proceed.

dhinckley
    Multipath wouldn't be involved if you just had a RAID 5 directly attached to the Dell RAID controller. Something else is going on here. Run `multipathd -k` and run `show config` and `show paths` and paste the output. Also pastebin the output from `megacli -AdpAllInfo -aALL`. You may have to download MegaCLI. – Michael Hampton Jan 13 '13 at 02:46
  • @MichaelHampton: I've added the output that you requested. – dhinckley Jan 13 '13 at 03:55

3 Answers

1

You can blacklist the drive and multipath will skip it. Put:

blacklist {
devnode "sd[a-b]"
}

defaults {
user_friendly_names yes
}

in /etc/multipath.conf and reboot. The filesystem looks intact, so don't worry about data loss. Also, when you run lsof, point it at the partition rather than the whole device (lsof /dev/sdb1, not lsof /dev/sdb); the same goes for fuser. Try the blacklist first, though, as it may be all you need.
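As an aside, devnode patterns are regular expressions matched anywhere in the device name, so an anchored pattern is safer; this is a sketch of what /etc/multipath.conf might look like, not the poster's exact file:

```
blacklist {
    devnode "^sd[a-b]$"
}

defaults {
    user_friendly_names yes
}
```

The anchors limit the match to exactly sda and sdb, so longer names such as sdaa are left alone.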

grs
  • Thank you for the response, @grs. My /etc/multipath.conf file actually already contains the line `devnode "sd[a-b]$"`. as well as the `user_friendly_names yes`. Is that dollar sign problematic? Interestingly, the `show config` command does not appear to include sd* devices (see the main body of my question, edited). – dhinckley Jan 13 '13 at 03:59
  • @dhinckley - I must be blind but I can't see `sd[a-b]` in your `multipathd show config` output. `hd[a-b]` are blacklisted, but not `sd[a-b]`. – grs Jan 13 '13 at 05:47
  • If you are blind, then so am I! I don't see it either. I've edited my question to also contain the contents of my `/etc/multipath.conf` file. – dhinckley Jan 13 '13 at 22:29
  • I replaced the command `blacklist_devnode { devnode sd[a-b]* }` with `blacklist { devnode sd[a-b]* }` and `sd[a-b]` now shows up in `multipathd show config`. Problem solved!... though I'm not quite sure why the first command didn't work. @grs: I'll upvote your answer when I have sufficient reputation. – dhinckley Jan 14 '13 at 16:55
  • Important: devnode takes a regular expression. You want to use `^sd[a-b]$`, otherwise it will exclude more stuff than you want... e.g. `sd[a-b]` will match `sdaa`, `sdbz`, (and also `sdsda`), etc... – Gert van den Berg Aug 10 '17 at 08:53
0

In your /etc/multipath.conf file change:

devnode_blacklist {
devnode "sd[a-b]$"
...
}

to:

devnode_blacklist {
devnode "sd[a-b]*"
...
}



This will blacklist /dev/sdb1, whereas your current configuration does not.

  • Only making this change didn't fix my problem. I had to change `devnode_blacklist {...}` to `blacklist {...}`. – dhinckley Jan 14 '13 at 17:47
  • devnode takes regular expressions... `sd[a-b]*` means `sd` followed by zero or more characters in the range `a-b`. A partial match is sufficient, so this advice would result in every device name containing `sd` being blacklisted (all SCSI devices). (Multipathing also operates at the device level, not the partition level.) – Gert van den Berg Aug 10 '17 at 08:59
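The partial-matching pitfall described in the comments can be checked with any regex engine. A quick Python sketch (Python is used here purely for illustration; multipath's devnode matching is implemented in C, but the unanchored-substring behavior is the same):

```python
import re

# Unanchored "sd[a-b]" matches anywhere in the name, so "sdaa" is caught too.
assert re.search(r"sd[a-b]", "sda")
assert re.search(r"sd[a-b]", "sdaa")        # unintended match

# Anchored "^sd[a-b]$" matches only the whole-disk names sda and sdb.
assert re.search(r"^sd[a-b]$", "sda")
assert not re.search(r"^sd[a-b]$", "sdaa")
assert not re.search(r"^sd[a-b]$", "sdc")
```

This is why `^sd[a-b]$` is the safer blacklist pattern when the intent is to exclude only the two local drives.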
0

Maybe this helps. In my case, I had problems with multipath and "Device or resource busy". I used multipath -l to list all the mappings, then removed the mappings one by one with multipath -f <MAPPING NAME>. You could probably use multipath -F to remove them all at once. After that I was able to create the RAID.

I should note that those hard disks had been zeroed out and I had just created one big GPT partition on each of them. Blacklisting in /etc/multipath.conf seems a plausible approach as well: you don't want multipath touching your disks while you are putting them into an array.

AdamKalisz