
I have an ML350 G5 that I'm thinking of repurposing to save money. I'm looking to install FreeNAS, but from what I've read, it (ZFS) doesn't play nice with the HP E200i controller embedded on the motherboard. I'd like to buy a good, used PCIe x4/x8 RAID card for cheap and connect it directly to the backplane, allowing me to continue using the LFF cage for my drives.

The backplane appears to use two 4-lane SAS cables with SFF-8484 connectors on both ends. Can I disconnect one and, using a breakout cable, reroute it to my add-in RAID card? In my mind, that would electrically split the cage in half: 3 drives using the E200i, 3 drives using the new card.

I have no idea how much logic is built into a RAID backplane, or an HP backplane in particular. I don't know whether it's a "dumb" component that only makes an electrical connection from the drives to the RAID controller, or a "smart" one that performs logic functions and is effectively proprietary.

thoughts? thanks!

  • Why are you interested in splitting the backplane between controllers? – ewwhite Jan 02 '15 at 01:11
  • you threw a lot of new knowledge in your answer - lemme back up to where I was before you did. 1. my pcie slot is x4 1.0 which has limited throughput 2. I was under the impression FreeNAS wouldn't work through the e200i 3. I wanted to make sure the extra RAID card I put in would have all available bus bandwidth for my virtual FreeNAS install to answer some of your other questions; BBU on the e200i is brand new - am going to be running ESXI and a couple VM's, am only using Debian Wheezy right now – Rich Barrett Jan 02 '15 at 01:25

1 Answer


If I were dealing with that model/vintage of server (circa 2005-2008), I would probably make use of the existing setup... A few points:

  • The 6-disk 3.5" backplane in a G5 ML350 is a dumb component. There's no RAID logic or SAS expansion built in.
  • You can connect this backplane and cage to any RAID controller or SAS HBA, provided you use the right cabling. SFF-8484 on the backplane side, and possibly SFF-8087 on the controller side, if you use a newer controller.
  • This is old hardware, so understand the limits of your PCIe slots and the 3.0 Gbps SAS bandwidth.
  • If you use SATA drives, the link speeds will be capped at 1.5Gbps per disk if you use a period-correct HP Smart Array controller (E200i, P400, P800).

What would I do?

  • I'd drop FreeNAS. It's not that great a solution, and you'll lose some of the HP ProLiant platform monitoring features. The on-disk ZFS format under FreeNAS is also a bit quirky... FreeNAS has been fodder for a few WTF ServerFault questions.
  • Instead, ZFS-on-Linux or an appliance package that leverages it would be a better option. Check out the free Community Edition of QuantaStor or ZetaVault.

Finally, for this scale of hardware, it makes sense to just use your existing HP Smart Array E200i controller.

  • If you take the approach of a ZFS-focused OS and a JBOD-capable controller or HBA, you'll have to allocate disks for the OS as well as the data. That's a potential waste of disk space. If you instead carve the disks up into partitions or slices, your ZFS configuration becomes extremely complex and error-prone.
  • The E200i is a capable controller and you'll have the benefit of a write cache (if the RAID battery is present and healthy).
  • If you really want to use ZFS, you can do so on top of a hardware RAID controller. I do this all the time in order to provide some ZFS features (snapshots, compression, etc.) while still having the ease and flexibility of hardware array monitoring.
  • HP Smart Array controllers can be configured to provide multiple logical drives (block devices) from a single group of disks (an "Array"). In the example below, I configured the E200i in an ML350 G5 server with four 500 GB SATA disks to provide a 72 GB OS drive, plus 240 GB and 200 GB drives to be used as separate ZFS zpools.

    Smart Array E200i in Slot 0 (Embedded)    (sn: QT8CMP3716     )
    
    Internal Drive Cage at Port 1I, Box 1, OK
    
    Internal Drive Cage at Port 2I, Box 1, OK
    array A (SATA, Unused Space: 0  MB)
    
    
      logicaldrive 1 (72.0 GB, RAID 1+0, OK)
      logicaldrive 2 (240.0 GB, RAID 1+0, OK)
      logicaldrive 3 (200.0 GB, RAID 1+0, OK)
    
      physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SATA, 500 GB, OK)
      physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SATA, 500 GB, OK)
      physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SATA, 500 GB, OK)
      physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SATA, 500 GB, OK)
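
A layout like the one above can be carved out with HP's hpacucli utility. This is a rough sketch, not the exact commands used for the listing above; the slot number, drive IDs, and MB sizes are illustrative and should be adjusted for your system:

```shell
# Sketch: carve multiple logical drives out of one array with hpacucli.
# Slot, drive IDs, and sizes below are assumptions -- check "show config" first.

# Create the first logical drive (this also creates array A from the four disks)
hpacucli ctrl slot=0 create type=ld \
    drives=1I:1:1,1I:1:2,1I:1:3,1I:1:4 raid=1+0 size=73728

# Add two more logical drives from the remaining space in array A
hpacucli ctrl slot=0 array A create type=ld raid=1+0 size=245760
hpacucli ctrl slot=0 array A create type=ld raid=1+0 size=204800

# Verify the resulting layout
hpacucli ctrl slot=0 show config
```

Each logical drive shows up to the OS as its own block device, which is what lets you point ZFS at the second and third drives while the first holds the OS.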
    

zpool status output:

  pool: vol1
 state: ONLINE
  scan: scrub repaired 0 in 1h33m with 0 errors on Thu Jan  1 09:19:21 2015
config:

    NAME                                       STATE     READ WRITE CKSUM
    vol1                                       ONLINE       0     0     0
      cciss-3600508b1001037313620202020200007  ONLINE       0     0     0

errors: No known data errors

  pool: vol2
 state: ONLINE
  scan: scrub repaired 0 in 2h3m with 0 errors on Thu Jan  1 09:49:35 2015
config:

    NAME        STATE     READ WRITE CKSUM
    vol2        ONLINE       0     0     0
      cciss-4542300b6103731362020202067830007      ONLINE       0     0     0
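
Building those pools on top of the Smart Array logical drives is then a one-liner each. A sketch assuming ZFS-on-Linux and the /dev/disk/by-id device names shown above (your cciss IDs will differ):

```shell
# Sketch: single-device zpools on top of E200i logical drives.
# Device paths are examples from this machine; substitute your own by-id names.
zpool create vol1 /dev/disk/by-id/cciss-3600508b1001037313620202020200007
zpool create vol2 /dev/disk/by-id/cciss-4542300b6103731362020202067830007

# Turn on the ZFS features mentioned above
zfs set compression=lz4 vol1
zfs set compression=lz4 vol2
zfs snapshot vol1@baseline
```

Redundancy is handled by the RAID 1+0 logical drive underneath, so each pool is a single vdev; ZFS still gives you checksums, snapshots, and compression on top.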
  • I plan on running ESXI and hosting a FreeNAS VM, a Kolab VM and maybe a Debian desktop ... is for my business which I just started and it will be in production but on a very small scale. By the time I'm ready to scale up, I'll have the $$$ for something new(er) - right now I have to allocate my money elsewhere. – Rich Barrett Jan 02 '15 at 01:34
  • btw, thank you for the answer and leads to other open source NAS software. am researching them now. – Rich Barrett Jan 02 '15 at 01:36
  • If you are going to use ESXi, just use ESXi and VMFS on top of the HP hardware RAID. There's no need to add FreeNAS into the mix or attempt any level of RAID controller/SAS passthrough. – ewwhite Jan 02 '15 at 01:37
  • well I run multiple subnets and have a scenario where I need to dictate share access / visibility based on the NIC / subnet the request is coming from. I was under the impression I could do that with FreeNAS. One NIC is my personal VLAN and the other is the Business / Public -- btw, doesn't the e200i have a drive size limitation of like 2TB? – Rich Barrett Jan 02 '15 at 01:52
  • @RichBarrett Are you planning to virtualize FreeNAS? – ewwhite Jan 02 '15 at 02:13
  • yes ... from what I've read, it works fine under ESXI. agree? disagree? btw, I only planned on running 2ea. 3TB sata drives which is more than plenty for me now - I use about 1TB of storage distributed over 3 machines that I want to consolidate. – Rich Barrett Jan 02 '15 at 02:21
  • Then just use FreeNAS as a VM. Don't bother with any additional controllers or passthrough. – ewwhite Jan 02 '15 at 02:22
  • after further research, I have decided to run FreeNAS as the host OS and everything / anything else inside a jail which can be done from within the FreeNAS interface. thank you – Rich Barrett Jan 02 '15 at 16:23