
I've got an older machine (HP DL180 G6) using an HP SmartArray controller (model P410) with 12 drives connected to it. I was not all that interested in the controller's functions, as I wanted to set up a ZFS array, but I found out too late the controller had no passthrough mode.

As a workaround, I created 12 logical "RAID 0" volumes - one for each drive. This setup has worked well for about 3 years now.
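(For reference, on this generation of controller those per-drive logical volumes are typically created with HP's `hpacucli` tool, one `create` per physical disk. A hedged sketch; the slot number and drive bay addresses below are placeholders, check `hpacucli ctrl all show config` for your actual layout:)

```shell
# Illustrative only: assumed controller slot 0 and placeholder bay
# addresses. One single-drive "RAID 0" logical volume per physical disk.
hpacucli ctrl slot=0 create type=ld drives=1I:1:1 raid=0
hpacucli ctrl slot=0 create type=ld drives=1I:1:2 raid=0
# ...repeat for the remaining bays, one create per drive.
```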

The controller has started to show signs of failure, so I want to take this opportunity to move to a plain old SATA HBA now that the funds are available.

After swapping out the controller for the HBA, will I need to take other steps to have my drives readable, or will it "just work"? (In other words: did the SmartArray do anything to the on-disk data structures that would render the data unreadable to anything else?)

Mikey T.K.

2 Answers


For a DL180 G6, you have a couple of options:

  • Continue to use your multiple RAID 0 arrays - The problem with this is that a drive failure is essentially a Logical Drive failure, and would probably require a reboot to recognize a replacement disk.

  • Upgrade to a Smart Array P420 or H220 or H240. The P420 can be placed in "HBA mode". The H220 and H240 are HBAs (LSI chipsets). This will give you the raw disk access you're asking for.

  • Screw it and just make a hardware RAID array of the level you desire (RAID 1+0), create a small logical drive for your OS (sda) and another large logical drive that can be consumed by your zpool. This gives you ZFS volume management and flexibility, but hardware RAID, easy drive replacement, monitoring and a flash/battery-backed write cache.

People on the internet will say "no, don't do this... ZFS wants raw disks", but in reality, this maximizes your disk space because you don't need to allocate OS disks. HP hardware RAID is very resilient. Write cache is nice to have. ZFS is really best suited for the flexibility and performance enhancements of lz4 compression and ARC/L2ARC. If you're not in a position to have proper ZIL SLOG devices and a really well architected setup, the ZFS purist raw disk thing isn't as crucial.
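(A rough sketch of that third layout using the `hpssacli` CLI; the slot number, sizes, and device names below are assumptions, not a tested recipe. Verify the layout with `hpssacli ctrl slot=0 show config` before and after.)

```shell
# Assumed: controller in slot 0, all 12 drives unassigned, and the two
# logical drives surfacing as /dev/sda and /dev/sdb. Sizes (in MB) are
# illustrative placeholders.
hpssacli ctrl slot=0 create type=ld drives=allunassigned raid=1+0 size=51200  # ~50 GB OS logical drive
hpssacli ctrl slot=0 array A create type=ld size=max                          # remainder for the zpool

# Install the OS on the small logical drive, then hand ZFS the large one:
zpool create -o ashift=12 tank /dev/sdb
zfs set compression=lz4 tank
```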

ewwhite
  • I do like the idea of keeping the RAID controller around, to be honest, just due to flexibility if nothing else. Is the upgrade from P410->420 a plug and play affair? – Mikey T.K. Apr 14 '17 at 23:00
  • It's plug/play. But if you restructure this, you will have to lose the data on the disks. – ewwhite Apr 14 '17 at 23:07
  • It may not be specific to this card, but most modern hardware RAID upgrade paths allow you to "reshape" arrays to conform to new table and metadata specifications without losing data. But in this case, you should be able to just drop the 420 in and profit. – Spooler Apr 15 '17 at 20:39

No, it won't just work. Not in normal situations, anyway. What you've created is a controller-specific metadata and partitioning layer: the RAID controller writes its own on-disk structures and joins them into the abstractions it presents to the OS. None of this is the "normal" partitioning and disk metadata that operating systems can read directly. To get at the data, something would have to read the disk at the controller's specific offsets, and also understand the volume format that sits beneath the traditional disk structures.

It's typically easiest to back this system up, then restore it to new disks and a new HBA that doesn't put abstractions in the way of disk access.

Spooler
  • Yes, you can read them with linux and a normal HBA, as long as no special feature key was used. – John Keates Apr 15 '17 at 02:08
  • True, but you'd still have to build an abstraction for it before it would be readable, with a likely combination of `losetup`, `dmsetup`, and `mdadm --build` plus knowledge of the exact RAID parameters used to create the array. That's not exactly ideal in production, so best to find a way to get rid of the structures altogether if planning to move to a DAS solution. Otherwise, you could just go all in with hardware RAID and present abstractions with a new card that is within the upgrade path, as @ewwhite mentioned. – Spooler Apr 15 '17 at 20:36
  • No, mdadm reads HP's SmartArray spec just fine, as long as no 'feature key' options were used when the array was created. The metadata is on-disk and can be read fine. Even better: with single-disk RAID0 (i.e. closest you can get to pass-through on some controllers), all FS-data is directly accessible on the disk, you can even zero out the controller metadata. Same goes for many LSI controllers, Adaptec, Areca, 3Ware... – John Keates Apr 15 '17 at 20:39
  • That's fairly dope. I didn't know mdadm could do that with those abstractions. Then again, the last time I used mdadm directly in any sort of advanced capacity was years ago. I'm definitely going to go abuse some RAID controllers for fun just to watch it work. – Spooler Apr 15 '17 at 20:42
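(For the curious, the `mdadm --build` route from the comments above looks roughly like this. It's a hypothetical sketch: the data offset is a placeholder whose real value depends on the controller's metadata layout, and everything should stay read-only until verified.)

```shell
# OFFSET is a placeholder; the real value depends on where the
# controller placed the start of the data region on the member disk.
OFFSET=0
losetup --offset "$OFFSET" /dev/loop0 /dev/sdb

# Build a metadata-less array over the exposed region; the parameters
# must match what the controller originally used.
mdadm --build /dev/md0 --level=0 --raid-devices=1 /dev/loop0

# Mount read-only until the filesystem has been verified intact.
mount -o ro /dev/md0 /mnt
```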