
Context

I have a server freshly provisioned with CentOS 7, connected to a Disk Array through a Fibre Channel connection.

I want to mount the Disk Array's disks on my server's file system, and then set up an NFS server on it to make this storage available to all the nodes in the cluster (both the server and the disk array are part of a small cluster I'm managing).
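For later reference, once the storage is mounted (say at /mnt/disk_array, a placeholder path), a minimal NFS export on CentOS 7 could look roughly like this (the 10.0.0.0/24 subnet is just an example standing in for the cluster network):

$ yum install -y nfs-utils
$ echo '/mnt/disk_array 10.0.0.0/24(rw,sync,no_root_squash)' >> /etc/exports
$ systemctl enable nfs-server
$ systemctl start nfs-server
$ exportfs -rav          # re-export everything listed in /etc/exports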

Server: Bull R423

Disk Array: DDN S2A6620 (DataDirect Networks)

I only use one of the Disk Array's two controllers.

Here is an excerpt of the lspci command output:

# lspci
85:00.0 Fibre Channel: Emulex Corporation Saturn-X: LightPulse Fibre Channel Host Adapter (rev 03)
85:00.1 Fibre Channel: Emulex Corporation Saturn-X: LightPulse Fibre Channel Host Adapter (rev 03)

So I think my server correctly detects the FC HBA (Fibre Channel Host Bus Adapter), which seems to be an Emulex card.
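To double-check that the kernel actually bound a driver to the HBA, the Emulex lpfc module and the fc_host entries in sysfs can be inspected (just a sanity check on my side, not a step from the vendor guide):

$ lsmod | grep lpfc                          # the Emulex FC driver should be loaded
$ ls /sys/class/fc_host/                     # one hostN entry per HBA port
$ cat /sys/class/fc_host/host*/port_state    # e.g. Online / Linkdown
$ cat /sys/class/fc_host/host*/port_name     # the WWPN of each port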

The disk array is compatible with Red Hat 5&6 servers, so I'm not sure whether it can actually work with a CentOS 7 server, but I decided to give it a try.

I've followed the Disk Array's user guide: I've been able to connect to it remotely from the server, and I have done all the necessary configuration (creating a RAID 1 storage pool of 2 disks, creating a virtual disk from that pool to present the disks to the server, presenting the virtual disk to the host with a LUN number, ...). But the user guide doesn't say anything about the server side.


Disk Array configuration

Some details about how I performed the configuration on the disk array side.

The disk array OS is SFA OS v1.3.0.8. The closest manual I found is this one (v1.4.0). Basically, here are the steps I followed (corresponding to section 3.6.5 of the document, and following):

  1. Clean the disk array

$ application delete presentation *
$ application delete host *
$ delete virtual_disk *
$ delete pool *

  2. Create a storage pool

$ create pool raid_level=raid1 number=2

number stands for the number of disks in the pool. The created pool has id 7.

  3. Create a virtual disk based on that pool

$ create virtual_disk capacity=max pool=7

The virtual disk is based on the pool I just created and uses all its storage capacity.

  4. Create a host object corresponding to my server:

$ application create host name=io1 ostype=linux

  5. Import a discovered initiator into a relationship with the host:

$ app show discovered *
Index | Type | ID       |        Initiator Identifier             | Ctrl 0 | Ctrl 1
      |      |          | node               | port               |        |
00003 | FC   | 0x000001 | 0x20000000c99de40f | 0x10000000c99de40f |        | 1
Total FC Initiators: 1

There is only one discovered initiator, with id 3. It corresponds to one of my server's Fibre Channel hosts:

$ cat /sys/class/fc_host/host10/port_name
0x10000000c99de40f

It is associated with controller 1 of the disk array, which is actually the only controller I'm using.

$ application import discovered_initiator 3 host 3

  6. Present a virtual disk to the host

$ application create presentation virtual_disk 7 host 3

(The id of the virtual disk I created is 7)

Both the virtual disk and the storage pool appear to be in Ready state.


Problem

Now that I've supposedly presented the disks to my server, I want to mount that storage space as a filesystem on my server.

I've checked the /dev/ directory. Only the sda disk (my server's local hard drive) is there for now. I went through every single file in /dev/ and found a few that might have something to do with Fibre Channel or SCSI:

  • /dev/bsg/ is a directory dedicated to the Linux SCSI generic driver, containing /dev/bsg/fc_host9 and /dev/bsg/fc_host10;
  • /dev/lpfcmgmt is dedicated to an Emulex driver;
  • /dev/tgt, used by SCSI target.
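Another way to see what the kernel currently knows about, rather than going through /dev/ entry by entry, is to list the SCSI devices directly (assuming lsblk is available and lsscsi is installed, which is a separate package):

$ lsblk -S                           # SCSI devices with their transport (sata, fc, ...)
$ lsscsi                             # one line per SCSI device the kernel has attached
$ ls /sys/class/fc_remote_ports/     # remote FC ports the HBA has discovered, if any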

I installed sg3_utils and ran a scan on fc_host10:

$ sg_scan /dev/bsg/fc_host10
/dev/bsg/fc_host10: scsi0 channel=0 id=0 lun=0 [em]

After I ran that scan command, I still couldn't find any additional /dev/sd* device.

Furthermore, /sys/class/fc_host/host10/ is a link to /sys/devices/pci0000:80/0000:80:07.0/0000:85:00.1/host10/fc_host/host10, so I guess that gives me kind of an 'ID' of the bus.

But here is the list of files in directory /dev/disk/by-path:

$ ll /dev/disk/by-path
total 0
lrwxrwxrwx. 1 root root  9 Aug  3 22:02 pci-0000:84:00.0-scsi-0:0:0:0 -> ../../sda
lrwxrwxrwx. 1 root root 10 Aug  3 22:02 pci-0000:84:00.0-scsi-0:0:0:0-part1 -> ../../sda1
lrwxrwxrwx. 1 root root 10 Aug  3 22:02 pci-0000:84:00.0-scsi-0:0:0:0-part2 -> ../../sda2

The IDs don't match, and anyway those are symbolic links to /dev/sda*, which correspond to my server's local disk.

As suggested by billyw, I ran

echo '- - -' > /sys/class/scsi_host/host10/scan

but it didn't output anything, and there still wasn't any new /dev/sd* appearing.


Questions

I'm assuming that, upon success, the disks should appear as some /dev/sd* device. Is that true? If not, where should those disks appear?

Finally, how do I make these disks visible from my server's point of view?


EDIT

Following billyw's advice, I ran echo 1 > /sys/class/fc_host/hostX/issue_lip. Here are the logs.

Apparently the FLOGI errors are not relevant here, since I'm in a loop topology, not a fabric topology. Still, no disks were appearing in /dev.
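Since the LIP is asynchronous, its outcome has to be picked up from the kernel log, e.g.:

$ tail -f /var/log/messages          # wait for the FC link events here
$ dmesg | grep -i lpfc | tail        # or grep the kernel ring buffer afterwards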

Then, following this thread, I restarted the lpfc driver:

$ modprobe -r lpfc
$ modprobe lpfc

Which resulted in these logs in /var/log/messages.

This time, /dev/sdb and /dev/sdc appeared. But I couldn't mount them:

$ mount /dev/sdb /mnt/db
mount: /dev/sdb is write-protected, mounting read-only
mount: unknown filesystem type '(null)'

So I tried to investigate the logs generated when restarting lpfc. First, I noticed the message Link Up Event npiv not supported in loop topology. I restarted lpfc, disabling npiv this time (I think npiv is useless in my case):

$ modprobe -r lpfc
$ modprobe lpfc lpfc_enable_npiv=0

The logs are mostly the same, but the npiv message disappeared.
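If disabling npiv turns out to be the right setting, it can presumably be made persistent with a modprobe configuration file instead of passing the option by hand on every reload (a sketch; the file name lpfc.conf is my own choice):

$ echo 'options lpfc lpfc_enable_npiv=0' > /etc/modprobe.d/lpfc.conf
$ dracut -f          # rebuild the initramfs in case lpfc is loaded from there at boot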

I'm still investigating the logs, next error on my TODO list is Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatically remap LUN assignments..
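If I read that warning correctly, the usual answer is to have the SCSI layer rescan the target rather than reload the whole driver, either with the helper script shipped in sg3_utils or through sysfs (I haven't verified this against the array yet):

$ rescan-scsi-bus.sh                               # from sg3_utils; -r also removes devices that disappeared
$ echo '- - -' > /sys/class/scsi_host/host10/scan  # manual alternative for a single host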

Elouan Keryell-Even
  • Is the FC HBA recognized by the system? Are you using zoning in your FC environment? – Sven Aug 04 '15 at 15:20
  • I added the output of `lscpi` to the question. It seems to me that the server is able to detect the FC HBA. As to the zoning, I don't know anything about that, I have near to zero experience with Fibre Channel, sorry. Maybe you could help me by telling me how to check if zoning is used or not? Anyway thanks for your attention :) – Elouan Keryell-Even Aug 04 '15 at 15:48
  • Did the vendor-specific instructions include any steps for enabling the server's access to the disk array, using WWNNs or WWPNs, or something similar? – billyw Aug 04 '15 at 16:15
  • @billyw I made a quick search in the user guide. I found no trace of WWNN, but found something about WWPN. There is a command to create an application initiator for a specified host: `application create initiator= wwpn=`. I didn't use that command though, because it didn't seem necessary. But I will try that tomorrow, maybe that's the solution. – Elouan Keryell-Even Aug 04 '15 at 16:29
  • `mount /dev/sdb /mnt/db` probably won't work since, if these are new volumes from your SAN, there's likely no partition table or filesystem. You'd need to use `fdisk`/`parted` or some other tool to create the partitions, then some variant of `mkfs` to create the filesystem. – GregL Aug 07 '15 at 14:05

2 Answers


To have the disks appear under /dev/sd*, the solution was to restart the Fibre Channel HBA driver, as stated in this answer. In my case, that driver was lpfc:

Stop the lpfc driver:

$ modprobe -r lpfc

Start the lpfc driver:

$ modprobe lpfc

Then my device appeared under /dev/sdb. After that, as stated by GregL, I needed to partition the device and then format it with a filesystem.

Following that thread:

  1. I created a GPT partition table on the disk, using parted's mklabel command:

$ parted /dev/sdb mklabel gpt

  2. Then I created a primary partition occupying all the space on the device (from 0% to 100%), with optimal alignment, using parted's mkpart command:

$ parted --align optimal /dev/sdb mkpart primary 0% 100%

That gave me a partition /dev/sdb1.

  3. Afterwards, I formatted the partition with the xfs filesystem (I was already using that filesystem for my other partitions):

$ mkfs.xfs /dev/sdb1

  4. Finally, I mounted the partition:

$ mount /dev/sdb1 /mnt/disk_array/

And now everything works fine :)
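One follow-up note: /dev/sdX names are not guaranteed to stay the same across reboots or rescans, so it is safer to reference the partition by UUID in /etc/fstab if the mount should persist (a sketch; the UUID below is a placeholder for the value blkid prints):

$ blkid /dev/sdb1                                                              # note the partition's UUID
$ echo 'UUID=<uuid-from-blkid> /mnt/disk_array xfs defaults 0 0' >> /etc/fstab
$ mount -a                                                                     # check that the fstab entry mounts cleanly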

Elouan Keryell-Even

I'm basing my answer on this documentation for your product, particularly section 3.12. I don't own your product, so trust the documentation more than you trust me.

You need to configure the DDN storage array so that the server is authorized to access it. In your case, the term initiator refers to the Fibre Channel HBA on the server, and the term target refers to the ports on the DDN storage array which will present the LUNs.

To summarize the steps:

  • Show the initiators that the storage array detects, with APPLICATION SHOW DISCOVERED_INITIATOR *
  • Create a host (which appears to be just a form of labeling), with APPLICATION CREATE HOST INDEX=<index> NAME=<host name> OSTYPE=<ostype>
  • Map the host to an initiator, with APPLICATION IMPORT DISCOVERED_INITIATOR=<initiator_id> HOST=<host-id>
  • Check that the mapping is correct, with APPLICATION SHOW INITIATOR *
  • Present the virtual disks to the host, with APPLICATION CREATE PRESENTATION INDEX=<index> HOST=<host> VIRTUAL_DISK=<vd-id> LUN=<lun-id>
  • (There is an alternative promiscuous mode for presenting a virtual-disk to all host ports, but it comes with a warning)
  • Check that the presentation is correct, with APPLICATION SHOW PRESENTATION *

Server-side, you should be able to re-scan for the LUNs with the following (replacing the X with the HBA number):

echo '- - -' > /sys/class/scsi_host/hostX/scan
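If you're not sure which hostX belongs to the HBA, a simple variant is to rescan every SCSI host and then look for new devices (a generic sketch, not specific to the DDN array):

for h in /sys/class/scsi_host/host*/scan; do echo '- - -' > "$h"; done
dmesg | tail -n 30    # new LUNs show up as "Attached SCSI disk" messages
ls /dev/sd*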

If you plan to do more with Fibre Channel in the future, and don't want to live in the Wild West, I'd also recommend learning about zoning.

billyw
  • I was basing myself on the SFA OS v1.4.0 user guide (instead of v1.5.1), since my Disk Array is running v1.3.0.8. Anyway, the syntax seems to be the same in both manuals, and the procedure I followed is the same as the one you described. Except I was presenting the virtual disk to all hosts, but as you pointed out, it comes with a warning, so now I changed to a 1-host sharing. But `echo '- - -' > /sys/class/scsi_host/hostX/scan` isn't outputting anything, and I can't find any new disk appearing under `/dev/sd*`. I largely edited my question to add details, and I read Wikipedia's page about zoning :) – Elouan Keryell-Even Aug 05 '15 at 15:46
  • link to the guide I used : http://www.ddn.com/pdfs/S2A6620_1.4.0_User_Guide_H.pdf – Elouan Keryell-Even Aug 05 '15 at 15:46
  • You could try issuing a LIP (Loop Initialization Protocol), with `echo 1 > /sys/class/fc_host/hostX/issue_lip`, and then re-scan again. The LIP process is asynchronous, so you'll want to tail your log to see when it completes. – billyw Aug 05 '15 at 17:18
  • hey, I edited my post with the little investigation work I made yesterday and today. I'll start back investigating next monday :) – Elouan Keryell-Even Aug 07 '15 at 13:57