Context
I have a server freshly provisioned with CentOS 7, connected to a Disk Array through a Fibre Channel connection.
I want to mount the Disk Array's disks on my server's file system, and then set up an NFS server on it to make this storage available to all the nodes in the cluster (both the server and the disk array are part of a small cluster I'm managing).
Server: Bull R423
Disk Array: DDN S2A6620 (DataDirect Networks)
I only use one of the disk array's two controllers.
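For context, here is roughly what the NFS side of the plan would look like once the array's storage is mounted; /export/ddn and the 10.0.0.0/24 cluster subnet are just placeholders I'm using for the example:
$ yum install nfs-utils
$ echo '/export/ddn 10.0.0.0/24(rw,sync,no_root_squash)' >> /etc/exports
$ systemctl enable nfs-server
$ systemctl start nfs-server
$ exportfs -rav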
Here is an excerpt of the lspci command output:
# lspci
85:00.0 Fibre Channel: Emulex Corporation Saturn-X: LightPulse Fibre Channel Host Adapter (rev 03)
85:00.1 Fibre Channel: Emulex Corporation Saturn-X: LightPulse Fibre Channel Host Adapter (rev 03)
So I think my server correctly detects the FC HBA (Fibre Channel Host Bus Adapter), which seems to be an Emulex one.
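For what it's worth, a couple of standard sysfs checks to confirm that the lpfc driver is loaded and that the FC ports are online (host9 and host10 being the fc_host entries on this server):
$ lsmod | grep lpfc
$ cat /sys/class/fc_host/host9/port_state /sys/class/fc_host/host10/port_state
$ cat /sys/class/fc_host/host9/port_name /sys/class/fc_host/host10/port_name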
The disk array is compatible with Red Hat 5 and 6 servers, so I'm not sure whether it can actually work with a CentOS 7 server, but I decided to give it a try.
I've followed the Disk Array's user guide: I've been able to connect to it remotely from the server, and I have done all the necessary configuration (creating a RAID 1 storage pool of 2 disks, creating a virtual disk from that pool to present the disks to the server, presenting the virtual disk to the host with a LUN number, ...). But the user guide doesn't say anything about the server side.
Disk Array configuration
Some details about how I performed the configuration on the disk array side.
The disk array OS is SFA OS v1.3.0.8. The closest manual I found is this one (v1.4.0). Basically, here are the steps I followed (corresponding to section 3.6.5 of the document, and following):
- Cleaning the disk array
$ application delete presentation *
$ application delete host *
$ delete virtual_disk *
$ delete pool *
- Create a storage pool
$ create pool raid_level=raid1 number=2
number stands for the number of disks of the pool. The created pool has id 7.
- Create a virtual disk based on that pool
$ create virtual_disk capacity=max pool=7
The virtual disk is based on the pool I just created and uses all its storage capacity.
- Create a host object corresponding to my server:
$ application create host name=io1 ostype=linux
- Import a discovered initiator into a relationship with the host:
$ app show discovered *
        |      |          |            Initiator Identifier         |        |
Index   | Type | ID       | node               | port               | Ctrl 0 | Ctrl 1
00003   | FC   | 0x000001 | 0x20000000c99de40f | 0x10000000c99de40f |        | 1
Total FC Initiators: 1
There is only one discovered initiator, with id 3. It corresponds to one of my server's Fibre Channel hosts:
$ cat /sys/class/fc_host/host10/port_name
0x10000000c99de40f
It is associated with controller 1 of the disk array, which is actually the only controller I'm using.
$ application import discovered_initiator 3 host 3
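As an aside, to see at a glance which server-side fc_host corresponds to which WWPN (generalizing the cat command above), a small loop over sysfs should do the job:
$ for h in /sys/class/fc_host/host*; do echo "$h: $(cat $h/port_name)"; done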
- Present a virtual disk to the host
$ application create presentation virtual_disk 7 host 3
(The id of the virtual disk I created is 7)
Both the virtual disk and the storage pool appear to be in Ready state.
Problem
Now that I've supposedly presented the disks to my server, I want to mount that storage space as a filesystem on my server.
I've checked in the /dev/ directory. Only the sda disk is there for now (my server's hard drive). I looked into every single file in /dev/ and found a few that might have something to do with Fibre Channel or SCSI:
- /dev/bsg/ is a directory dedicated to the Linux SCSI generic driver, containing /dev/bsg/fc_host9 and /dev/bsg/fc_host10;
- /dev/lpfcmgmt is dedicated to an Emulex driver;
- /dev/tgt is used by the SCSI target.
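To get an overview of the block and SCSI devices the kernel currently sees, something like this should also work (lsscsi comes from a separate package of the same name; -g additionally prints the SCSI generic device node):
$ lsblk
$ lsscsi -g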
I installed sg3_utils and ran a scan on fc_host10:
$ sg_scan /dev/bsg/fc_host10
/dev/bsg/fc_host10: scsi0 channel=0 id=0 lun=0 [em]
After I ran that scan command, I still couldn't find any additional /dev/sd* device.
Furthermore, /sys/class/fc_host/host10/ is a link to /sys/devices/pci0000:80/0000:80:07.0/0000:85:00.1/host10/fc_host/host10, so I guess that gives me kind of an 'ID' of the bus.
But here is the list of files in the /dev/disk/by-path directory:
$ ll /dev/disk/by-path
total 0
lrwxrwxrwx. 1 root root 9 Aug 3 22:02 pci-0000:84:00.0-scsi-0:0:0:0 -> ../../sda
lrwxrwxrwx. 1 root root 10 Aug 3 22:02 pci-0000:84:00.0-scsi-0:0:0:0-part1 -> ../../sda1
lrwxrwxrwx. 1 root root 10 Aug 3 22:02 pci-0000:84:00.0-scsi-0:0:0:0-part2 -> ../../sda2
The IDs don't match, and anyway those are symbolic links to /dev/sda*, which correspond to my server's local disk.
As suggested by billyw, I ran
echo '- - -' > /sys/class/scsi_host/host10/scan
but it didn't output anything, and still no new /dev/sd* device appeared.
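Just in case the LUN shows up through the other HBA port, it might also be worth issuing the same rescan on every SCSI host at once:
$ for scan in /sys/class/scsi_host/host*/scan; do echo '- - -' > "$scan"; done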
Questions
I'm assuming that upon success the disks should appear as /dev/sd* devices. Is that true? If not, where should those disks appear?
Finally, how do I make these disks appear from my server's point of view?
EDIT
Following billyw's advice, I ran echo 1 > /sys/class/fc_host/hostX/issue_lip. Here are the logs.
Apparently the FLOGI errors are not relevant here since I'm in a loop topology, not a fabric topology. Still, no disks were appearing in /dev.
Now, following this thread, I restarted the lpfc driver:
$ modprobe -r lpfc
$ modprobe lpfc
This resulted in these logs in /var/log/messages.
This time, /dev/sdb and /dev/sdc appeared. But I couldn't mount them:
$ mount /dev/sdb /mnt/db
mount: /dev/sdb is write-protected, mounting read-only
mount: unknown filesystem type '(null)'
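If I read the unknown filesystem type '(null)' message correctly, it may simply mean that the virtual disk carries no filesystem yet, in which case I would have to create one before mounting, along these lines (xfs is just an example choice, and assuming the write-protection issue can be cleared up first):
$ mkfs.xfs /dev/sdb
$ mount /dev/sdb /mnt/db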
So I tried to investigate the logs generated when restarting lpfc. First, I noticed the Link Up Event npiv not supported in loop topology message. I restarted lpfc, disabling npiv this time (I think npiv is useless in my case):
$ modprobe -r lpfc
$ modprobe lpfc lpfc_enable_npiv=0
The logs are pretty much the same, but the npiv message disappeared.
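If disabling npiv turns out to be the right call, I suppose the usual way to make the setting persist across reboots is a modprobe configuration file (lpfc.conf is just a file name I picked; any .conf file under /etc/modprobe.d/ should work):
$ echo 'options lpfc lpfc_enable_npiv=0' > /etc/modprobe.d/lpfc.conf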
I'm still investigating the logs; the next error on my TODO list is Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments.
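Regarding that last warning: since sg3_utils is already installed, one thing left to try is its bundled rescan script, which is meant to make the SCSI layer pick up LUN changes:
$ rescan-scsi-bus.sh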