
We are trying to replace a DL360 G6 ESXi host with a DL360p Gen8. The G6 has a QLogic ISP2532 FC card inside and everything works fine. Since I don't have a spare FC card, I need to use the same card in the new server.

The problem I am having is that while the card seems to work fine in the new server, I do not see the storage. The SAN and FC switches are out of my hands; I have no access to them.

I am no SAN specialist, but I thought this shouldn't matter. The card is the same and the WWNs are the same, so in theory the zoning configuration on the switches shouldn't be affected.

If I move the FC card back into the old server, it works without problems and I can see the storage. This is what the kernel shows on the new server (when it's not working):

2016-04-11T13:36:25.937Z cpu7:33267)Loading module qlnativefc ...
2016-04-11T13:36:25.941Z cpu7:33267)Elf: 1865: module qlnativefc has license GPLv2
2016-04-11T13:36:25.979Z cpu7:33267)Device: 191: Registered driver 'qlnativefc' from 20
2016-04-11T13:36:25.979Z cpu7:33267)Mod: 4943: Initialization of qlnativefc succeeded with module ID 20.
2016-04-11T13:36:25.979Z cpu7:33267)qlnativefc loaded successfully.
2016-04-11T13:36:25.981Z cpu16:33266)qlnativefc: (7:0.0): scsi(0): MSI-X vector count from config space: 1f
2016-04-11T13:36:25.981Z cpu16:33266)qlnativefc: (7:0.0): scsi(0): MSI-X vector count usable: 2
2016-04-11T13:36:25.981Z cpu16:33266)qlnativefc: (7:0.0): scsi(0): Found an ISP2532, iobase 0x0x41000a200000
2016-04-11T13:36:25.981Z cpu16:33266)IntrCookie: 1915: cookie 0x14 moduleID 20 <qlnativefc (default)0> exclusive, flags 0x1d
2016-04-11T13:36:25.981Z cpu16:33266)IntrCookie: 1915: cookie 0x15 moduleID 20 <qlnativefc (rsp_q)0> exclusive, flags 0x1d
2016-04-11T13:36:25.981Z cpu16:33266)qlnativefc: (7:0.0): scsi(0): num_rsp_queues = 1, num_req_queues = 1
2016-04-11T13:36:25.981Z cpu16:33266)qlnativefc: (7:0.0): scsi(0): MSI-X: Enabled (0x2, 0x0).
2016-04-11T13:36:25.981Z cpu16:33266)qlnativefc: (7:0.0): scsi(0): Calling initialize adapter
2016-04-11T13:36:28.941Z cpu16:33266)qlnativefc: (7:0.0): scsi(0): FW: Loading via request-firmware...
2016-04-11T13:36:29.039Z cpu16:33266)qlnativefc: (7:0.0): scsi(0): Allocated (64 KB) for EFT...
2016-04-11T13:36:29.042Z cpu16:33266)qlnativefc: (7:0.0): scsi(0): Allocated (5414 KB) for firmware dump...
2016-04-11T13:36:29.054Z cpu16:33266)qlnativefc: (7:0.0): scsi(0): Setting Parameter to mark RDP ELS command as a passthrough ELS command
2016-04-11T13:36:29.066Z cpu16:33266)qlnativefc: (7:0.0): scsi(0): Enabling PUREX.
2016-04-11T13:36:29.078Z cpu16:33266)qlnativefc: (7:0.0): scsi(0): FAWWN feature : disabled.
2016-04-11T13:36:29.091Z cpu16:33266)qlnativefc: (7:0.0): scsi(0): Unable to read FCP priority data.
2016-04-11T13:36:29.101Z cpu16:33266)Device: 326: Found driver qlnativefc for device 0x74b0430275473f33
2016-04-11T13:36:29.103Z cpu16:33266)qlnativefc: (7:0.1): scsi(1): MSI-X vector count from config space: 1f
2016-04-11T13:36:29.103Z cpu16:33266)qlnativefc: (7:0.1): scsi(1): MSI-X vector count usable: 2
2016-04-11T13:36:29.103Z cpu16:33266)qlnativefc: (7:0.1): scsi(1): Found an ISP2532, iobase 0x0x41000a1b8000
2016-04-11T13:36:29.103Z cpu16:33266)IntrCookie: 1915: cookie 0x16 moduleID 20 <qlnativefc (default)1> exclusive, flags 0x1d
2016-04-11T13:36:29.103Z cpu16:33266)IntrCookie: 1915: cookie 0x17 moduleID 20 <qlnativefc (rsp_q)1> exclusive, flags 0x1d
2016-04-11T13:36:29.103Z cpu16:33266)qlnativefc: (7:0.1): scsi(1): num_rsp_queues = 1, num_req_queues = 1
2016-04-11T13:36:29.103Z cpu16:33266)qlnativefc: (7:0.1): scsi(1): MSI-X: Enabled (0x2, 0x0).
2016-04-11T13:36:29.103Z cpu16:33266)qlnativefc: (7:0.1): scsi(1): Calling initialize adapter
2016-04-11T13:36:30.053Z cpu0:33098)qlnativefc: (7:0.0): scsi(0): LOOP UP detected (4 Gbps).
2016-04-11T13:36:30.226Z cpu2:33269)qlnativefc: (7:0.0): scsi(0): SNS scan failed -- assuming zero-entry result...
2016-04-11T13:36:31.698Z cpu16:33266)qlnativefc: (7:0.1): scsi(1): FW: Loading via request-firmware...
2016-04-11T13:36:31.796Z cpu16:33266)qlnativefc: (7:0.1): scsi(1): Allocated (64 KB) for EFT...
2016-04-11T13:36:31.798Z cpu16:33266)qlnativefc: (7:0.1): scsi(1): Allocated (5414 KB) for firmware dump...
2016-04-11T13:36:31.811Z cpu16:33266)qlnativefc: (7:0.1): scsi(1): Setting Parameter to mark RDP ELS command as a passthrough ELS command
2016-04-11T13:36:31.823Z cpu16:33266)qlnativefc: (7:0.1): scsi(1): Enabling PUREX.
2016-04-11T13:36:31.835Z cpu16:33266)qlnativefc: (7:0.1): scsi(1): FAWWN feature : disabled.
2016-04-11T13:36:31.847Z cpu16:33266)qlnativefc: (7:0.1): scsi(1): Unable to read FCP priority data.
2016-04-11T13:36:31.860Z cpu16:33266)Device: 326: Found driver qlnativefc for device 0x6eeb43027547401e
2016-04-11T13:36:32.812Z cpu2:33100)qlnativefc: (7:0.1): scsi(1): LOOP UP detected (4 Gbps).
2016-04-11T13:36:33.234Z cpu10:33271)qlnativefc: (7:0.1): scsi(1): SNS scan failed -- assuming zero-entry result...
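The "SNS scan failed -- assuming zero-entry result" lines mean the fabric name server returned no targets to the HBA, which usually points at zoning rather than the card itself. As a sketch of how to confirm what the host side sees (standard ESXi commands; the vmhba numbers in the output will vary per system):

```shell
# List all storage adapters with their UIDs; for FC HBAs the UID
# contains the WWNN/WWPN, so you can verify the WWNs really did
# move with the card
esxcli storage core adapter list

# Alternative listing of HBAs with their link state and WWNs
esxcfg-scsidevs -a
```

Comparing this output between the old and the new server would show whether the fabric-facing identity (WWPN) is actually identical in both cases.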

Someone suggested to me that the zoning is somehow related to the server name and that I should change the new server's name to match the old one. I am not sure how this is related to anything, but as a last resort I changed the ESXi hostname. It didn't help.

Is there something I am missing? Does the zoning configuration on the switches have to be redone? If so, why?

Any help is appreciated.

Regards, Stefan

  • Is NPIV involved, or did you put datastores on the LUNs in question? Check the cable too. The WWN is, like a MAC address, hardware-dependent. – fuero Apr 11 '16 at 12:08
  • We're not making any changes to the datastores. As I mentioned, the card works fine in the old server, we see the storage, we see the datastores, all VMs run fine. All we did is move the card to the new server, used the same cables. The card shows Link up on its LEDs on the new server. The only difference I am seeing is that on the old server the ports in vSphere client are seen as vmhba1 and vmhba2. On the new server they are seen as vmhba2 and vmhba3. Is that relevant ? – Stefan Radovanovici Apr 11 '16 at 12:10
  • Unfortunately my storage person is not available at the moment, I was trying to do as much research as I can. When we did look yesterday, the switch port (in the GUI) showed as online but the zoning info was gone for whatever reason. When the card is in the old server, the zoning info is suddenly there on the switch port and I can see the storage. – Stefan Radovanovici Apr 11 '16 at 12:15
  • Well, add some details on your storage/switch vendor and model. As far as I can tell from what you posted, the client side works as it should. Add the cabling info and port status info to the question and delete the comments. I'll delete mine then too to remove the clutter ;-) – fuero Apr 11 '16 at 12:19
  • To my knowledge, we're not using NPIV. Unfortunately I have no access to the FC switches (I know they are Brocade) nor to the SAN (HP EVA) so I can't give exact details right now. I'll try to find out more when the storage guy is available again. I just wanted to know if moving the FC card to another server should have any impact or not from the FC switch point of view. I thought it shouldn't matter, it should be irrelevant for the switch in which server the card is. Obviously that's not the case here. – Stefan Radovanovici Apr 11 '16 at 12:24
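Given that the switches are Brocade, whoever does have access could verify the fabric side with the standard Fabric OS show commands (read-only; no configuration change needed). A minimal checklist, assuming FOS CLI access:

```shell
# On the Brocade switch:
switchshow    # per-port state; the HBA's port should show as an F-Port
              # with the card's WWPN attached
nsshow        # local name server entries; is the HBA's WWPN registered?
zoneshow      # effective zoning configuration; is that WWPN (or its
              # alias, or the port) a member of a zone with the EVA?
```

If zoning is done by switch port rather than by WWPN, moving the card to a server cabled to a different switch port would explain exactly this behaviour: link up, fabric login, but an empty name server response.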

1 Answer


Try running hardware diagnostics on the HBA card. If the hardware checks out (shows green), update the drivers to the latest version available from the hardware vendor.
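As a sketch of the driver-update part on ESXi (the VIB path below is an example placeholder; the actual file comes from the vendor/VMware download):

```shell
# Check the currently installed qlnativefc driver version
esxcli software vib list | grep qlnativefc

# Install the updated driver VIB (example path), then reboot the host
esxcli software vib install -v /tmp/qlnativefc-<version>.vib
```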

Anon