
I've got an HP X1600 with Solaris 11 installed on it. It's got a P212 SAS controller with a single external port.

I've got 2x 10k 2.5" SAS drives installed and configured as a RAID 1 on the controller, which acts as the system disk. I've then got 12x 7.2k 1TB 3.5" SATA drives plugged into the front of the chassis; each is individually configured as a single RAID0 volume on the controller, in order to present Solaris with individual disks that ZFS can then use.

This all worked perfectly.
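For reference, building a ZFS pool over those per-disk RAID0 volumes would look something like the sketch below (the pool name "tank" and the raidz2 layout are illustrative, not necessarily my actual configuration; the device names are the ones format reports further down):

    # Illustrative sketch only: one pool across the 12 per-disk RAID0 SATA volumes
    # (pool name and raidz2 layout are assumptions, not the actual configuration)
    zpool create tank raidz2 c7t1d0 c7t2d0 c7t3d0 c7t4d0 c7t5d0 c7t6d0 \
        c7t8d0 c7t9d0 c7t10d0 c7t11d0 c7t12d0 c7t13d0
    zpool status tank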

I subsequently acquired a D2700 and 12x 10k 2.5" 300GB SAS disks and racked it next to the X1600. I connected the D2700 to the P212 with a mini-SAS cable. Upon rebooting the X1600, the P212 saw all of the drives, and I configured each 2.5" SAS drive as its own RAID0 volume, just as I'd done with the SATA drives. In total, I now have 25 volumes:

  • 1x RAID 1 (2x 2.5" 10k disks) as internal system disk
  • 12x RAID0 volumes, effectively the 12 3.5" SATA disks
  • 12x RAID0 volumes, effectively the 12 2.5" SAS disks in the D2700

I've done a touch /reconfigure and a boot -r from within GRUB, but upon running format I see the following output:

   0. c7t0d0 <HP     -LOGICAL VOLUME -2.50 cyl 7828 alt 2 hd 255 sec 63>
      /pci@0,0/pci8086,3410@9/pci103c,3241@0/sd@0,0
   1. c7t1d0 <HP-LOGICAL VOLUME-2.50-931.48GB>
      /pci@0,0/pci8086,3410@9/pci103c,3241@0/sd@1,0
   2. c7t2d0 <HP-LOGICAL VOLUME-2.50-931.48GB>
      /pci@0,0/pci8086,3410@9/pci103c,3241@0/sd@2,0
   3. c7t3d0 <HP-LOGICAL VOLUME-2.50-931.48GB>
      /pci@0,0/pci8086,3410@9/pci103c,3241@0/sd@3,0
   4. c7t4d0 <HP-LOGICAL VOLUME-2.50-931.48GB>
      /pci@0,0/pci8086,3410@9/pci103c,3241@0/sd@4,0
   5. c7t5d0 <HP-LOGICAL VOLUME-2.50-931.48GB>
      /pci@0,0/pci8086,3410@9/pci103c,3241@0/sd@5,0
   6. c7t6d0 <HP-LOGICAL VOLUME-2.50-931.48GB>
      /pci@0,0/pci8086,3410@9/pci103c,3241@0/sd@6,0
   7. c7t8d0 <HP-LOGICAL VOLUME-2.50-931.48GB>
      /pci@0,0/pci8086,3410@9/pci103c,3241@0/sd@8,0
   8. c7t9d0 <HP-LOGICAL VOLUME-2.50-931.48GB>
      /pci@0,0/pci8086,3410@9/pci103c,3241@0/sd@9,0
   9. c7t10d0 <HP-LOGICAL VOLUME-2.50-931.48GB>
      /pci@0,0/pci8086,3410@9/pci103c,3241@0/sd@a,0
  10. c7t11d0 <HP-LOGICAL VOLUME-2.50-931.48GB>
      /pci@0,0/pci8086,3410@9/pci103c,3241@0/sd@b,0
  11. c7t12d0 <HP-LOGICAL VOLUME-2.50-931.48GB>
      /pci@0,0/pci8086,3410@9/pci103c,3241@0/sd@c,0
  12. c7t13d0 <HP-LOGICAL VOLUME-2.50-931.48GB>
      /pci@0,0/pci8086,3410@9/pci103c,3241@0/sd@d,0
  13. c7t14d0 <HP-LOGICAL VOLUME-2.50-279.37GB>
      /pci@0,0/pci8086,3410@9/pci103c,3241@0/sd@e,0
  14. c7t15d0 <HP-LOGICAL VOLUME-2.50-279.37GB>
      /pci@0,0/pci8086,3410@9/pci103c,3241@0/sd@f,0
Specify disk (enter its number):

As you can see, it's seeing the system disk and the 12 SATA drives perfectly, but it's only seeing 2 of the 12 external SAS disks. There is no /dev/dsk/c7t16d0 device, and no other devices in /dev/dsk that would appear to be the other drives.

The P212 data from HP (http://h18004.www1.hp.com/products/servers/proliantstorage/arraycontrollers/index.html) indicates that the controller supports up to 54 drives, and indeed the controller's BIOS sees all the drives and lets me configure them without issue. It's just Solaris that won't see them.

How do I solve this?

growse
  • Since all disks should be attached to the same driver that already exposes the 0-14 disks, check `prtconf -v` to see if it shows all instances. Also try `devfsadm -v` to create the links in /dev. If this doesn't work, it's probable that the SAS controller needs to be configured to expose the disks. – Giovanni Tirloni Jul 05 '11 at 00:00
  • `prtconf -v` only shows up to c7t15d0, so that doesn't seem to see the extra drives. `devfsadm -v` runs, waits and returns nothing, with nothing apparently changed within `/dev/dsk`. Given the controller has configured a total of 25 logical drives, I'm not sure what else I need to do to tell the controller to expose them to the OS? – growse Jul 05 '11 at 10:01
  • How did this setup work out in practice? You typically want to use a generic SAS HBA instead of a Smart Array controller when using ZFS so you can avoid having to create multiple RAID 0 logical disks. E.g. hot-plugging a drive will require a restart to recognize its replacement. See: http://serverfault.com/questions/84043/zfs-sas-sata-controller-recommendations – ewwhite Aug 27 '11 at 20:05
  • In practice I think I'm going to have to reboot whenever I add/change devices. It's annoying, but until I get budget to replace the array controller, I'll have to live with it. – growse Aug 27 '11 at 22:13

1 Answer


I solved it.

It turns out that you need to edit /kernel/drv/sd.conf so that the sd driver probes beyond the first 16 targets on LUN 0. To do this, I added the following lines:

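# Probe SCSI targets 16-25 on LUN 0 so sd creates nodes for the remaining logical volumes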
name="sd" class="scsi" target=16 lun=0;
name="sd" class="scsi" target=17 lun=0;
name="sd" class="scsi" target=18 lun=0;
name="sd" class="scsi" target=19 lun=0;
name="sd" class="scsi" target=20 lun=0;
name="sd" class="scsi" target=21 lun=0;
name="sd" class="scsi" target=22 lun=0;
name="sd" class="scsi" target=23 lun=0;
name="sd" class="scsi" target=24 lun=0;
name="sd" class="scsi" target=25 lun=0;

and issued a reboot -- -rv (a verbose reconfiguration reboot). I can now see the drives and have configured them.
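For what it's worth, it may also be possible to avoid the full reboot: update_drv can force a running system to re-read sd.conf, and a devfsadm pass then rebuilds the /dev links. I haven't tested this on this box, so treat it as a sketch:

    # Untested alternative to the reconfiguration reboot:
    # force the sd driver to re-read /kernel/drv/sd.conf...
    update_drv -vf sd
    # ...then rebuild the /dev/dsk and /dev/rdsk links
    devfsadm -Cv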

growse