
I am a C++ programmer and database administrator looking to expand my knowledge of server administration and maintenance. I have read the Wikipedia pages and several other documents I found by googling, but there are still a few things I don't understand.

  1. Consumer-level hardware comes with multiple headers for SATA and PATA connections, and you are expected to buy cables to connect these to your drives. In server hardware, there is a lot of talk about backplanes. If I buy a rackmount server, like a Dell PowerEdge, can I expect it to have all the needed connectors, so I can just slot in my SAS or SATA drives?

  2. How do the drives work with or without additional RAID controllers? If I plan on running ZFS or some other kind of software RAID, it seems that an expensive RAID controller may be an unnecessary upsell.

  3. How do external SAS boxes present the drives to the system? For example, a Sun J4200 http://www.sun.com/storage/disk_systems/expansion/4200/specs.xml claims to feature '4 (x4-wide) SAS host/uplink ports and 2 (x4-wide) SAS host/expansion ports'. Assuming the 'expansion' ports are used to daisy-chain multiple boxes together, does that mean that only sixteen (4 * 4 wide) drives can be visible to the system?

  4. To connect such a box to a system, I assume you need some kind of external SAS connector on the server. Are those normally standard on a system, or do you need to use SAS RAID adaptors that specifically provide external SAS ports?

frnknstn

2 Answers


Regarding Backplanes

It varies from vendor to vendor, but in general backplanes are not compatible with off-the-shelf hard drives. Many need some kind of drive carrier with a built-in interface between the drive's SAS connector and the backplane connector. This is because these kinds of systems are hot-plug, and that requires special bits.

Regarding RAID controllers

Hardware RAID provides a level of parallel processing that can come in very handy, as well as handling certain tasks better than software RAID can. One area is the on-adapter cache, which allows the RAID card to better virtualize the underlying storage so it performs better. Software RAID can do some of that, but hardware RAID still performs better these days. Also, in my experience HW RAID handles failures more gracefully than SW RAID. Your mileage may vary.

Regarding RAID and ZFS

This is going to sound a bit odd, but I run into the same issues with NetWare's NSS file-system (which looks a lot like ZFS, as it happens). In my case I trust the hardware vendors more to handle complex storage configs than I trust the software vendors to provide solid solutions. This may be misplaced trust, but I'd rather have a storage management system with several largish RAID arrays than one with 48 individual disk drives. This allows me to leverage the best of both environments.
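
To make that concrete, here's a minimal sketch of the two pool layouts. The pool name, device paths, and vdev widths are placeholders of my own, not anything taken from the hardware discussed above, and the script only prints the zpool commands rather than running them:

```python
# Sketch only: device paths and pool name are hypothetical placeholders.
import shlex

# Layout A: the hardware RAID controller builds a few large arrays, and ZFS
# simply stripes across the resulting LUNs (redundancy lives in the controller).
hw_luns = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]
layout_a = ["zpool", "create", "tank"] + hw_luns

# Layout B: ZFS gets all 48 raw disks and provides its own redundancy,
# here as four 12-disk raidz2 vdevs (other widths are equally valid).
raw_disks = [f"/dev/disk/by-id/disk{i:02d}" for i in range(48)]
layout_b = ["zpool", "create", "tank"]
for i in range(0, 48, 12):
    layout_b += ["raidz2"] + raw_disks[i:i + 12]

# Print the commands rather than executing them -- treat this as a dry run.
for cmd in (layout_a, layout_b):
    print(shlex.join(cmd))
```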

I can go into some detail about load leveling on hardware and software, but that's a bit beyond the scope of this article ;)

Regarding attaching external SAS arrays

If I'm reading that Sun unit correctly, it's a JBOD unit by itself. Attach it to a SAS RAID controller with external ports and you can use hardware RAID on it. Or attach it to a standalone SAS card and have up to 48 individual drives presented to the operating system. Either method will work. Whether or not the SAS RAID card can be configured for JBOD is up to the RAID card manufacturer; I've seen it go both ways over the years.
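
If you go the plain SAS card route, it's easy to sanity-check what the operating system actually sees. A rough sketch, assuming a Linux host (the udev symlink paths are standard Linux, nothing specific to the Sun shelf):

```python
# Rough sketch, assuming a Linux host: list whole-disk block devices so you
# can confirm a JBOD shelf really presents individual drives rather than one
# large RAID volume.
import glob
import os

# /dev/disk/by-path encodes which controller and port each disk hangs off,
# which makes externally attached SAS disks easy to spot.
for link in sorted(glob.glob("/dev/disk/by-path/*")):
    if "-part" in os.path.basename(link):      # skip partition entries
        continue
    print(os.path.basename(link), "->", os.path.realpath(link))
```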

Regarding "4 (x4-wide) SAS host/uplink ports (48 Gb/sec bandwidth)"

This means that the unit has multiple SAS ports on it, and it can do link aggregation for increased bandwidth. To make full use of this, you'll need 4 free ports on the card you attach it to. These can also be used to attach two hosts to this unit, if you're of a mind.
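
For what it's worth, the quoted 48 Gb/sec is just the lane arithmetic. A quick sketch, assuming first-generation SAS at 3 Gb/s per link:

```python
# Back-of-the-envelope check of the quoted "48 Gb/sec" figure, assuming
# first-generation SAS at 3 Gb/s per link.
ports = 4            # host/uplink ports on the shelf
lanes_per_port = 4   # "x4-wide"
gbps_per_lane = 3    # SAS 1.0 link rate
print(ports * lanes_per_port * gbps_per_lane, "Gb/s")   # -> 48 Gb/s
```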

The 'Expansion ports' on the spec are for attaching additional SAS shelves to the first unit. You'd attach your RAID card to the first unit, and then attach additional units to the first over those expansion ports. I think. Through this you can get silly amounts of direct-attached storage.

Regarding standard ports

Some of this varies from vendor to vendor, but in general 1U-2U servers these days do not ship with external storage connectors as standard. The 4U servers may be different, but I don't play with those that often, so I don't know first-hand. To get the ability to use external storage, you'll need an adapter card of some kind. Whether that's a simple SAS adapter or a smarter version of the built-in RAID adapter is up to you.

sysadmin1138

I can only speak for Dells, as they're what I have the most experience with.

High-end servers all have hot-swap disks, and these slot into a backplane. Entry-level servers can be bought without a hot-swap drive bay, and these normally do not have a backplane. Instead the drives are cabled straight to the motherboard, like a consumer PC.

Some Dells have a basic disk controller built in, but if you're forking out quite a lot of money for a high-end Dell server, it's assumed you'll be buying at least an entry-level RAID controller like a Perc6/iR, and probably the more expensive but still not extortionate Perc6/i.

The performance of these controllers is truly awesome, and they are well worth the money. While you can use software RAID, hardware RAID is faster, simpler and easier to manage. Bite the bullet and pay for a decent RAID controller.

To connect external disks you'd use a Perc6/e RAID controller or similar. This is a PCIe card with two SFF-8470 connectors, and you get the same connectors on your external drive enclosures. You just connect the enclosures to the Perc6/e card with the appropriate cable, and the Perc6/e then sees them in just the same way it would see internal disks.

JR

John Rennie