21

Please bear with me, as I'm a bit of a newcomer to 19" rack-mounted equipment.

I've thought a fair bit lately about the best way of getting 4x or 6x 2.5" hard drives into my rack and am currently really confused about what would be the best (read: economical) solution.

After scouting the market, I've found disk array units of this type, which offer built-in RAID, a lot of drive slots and a truckload of geek cred, but at a price that just isn't going to fit my budget.

I've also found these cute adapters that take two 2.5" drives in one 3.5" slot, but I will obviously need a chassis with a lot of 3.5" bays in order to make it work.


So what is the most economical way to house my hard drives in my rack?

Rob Moir
  • 31,664
  • 6
  • 58
  • 86
Industrial
  • 1,559
  • 5
  • 24
  • 37
  • Are you looking for strictly storage or a combination of storage/servers? Also, what interconnect are you looking to use to attach these to the hosts? – 3dinfluence Jan 08 '11 at 15:37
  • @Industrial: Do you need a full file serving solution or just some more disks to connect to your existing servers? Do you need something with a built-in RAID controller, or are you going to attach more disks to a server's own one? How do you plan to connect this storage to your existing equipment? There really are lots of solutions here, please clarify your needs. – Massimo Jan 08 '11 at 15:41
  • 3
    @Massimo: It sounds like he's just interested in storing them in the rack. I'd recommend cardboard boxes... they're very economical and ecologically friendly. – Evan Anderson Jan 08 '11 at 15:47
  • apart from being a cheap solution, what are you trying to achieve? give me a business goal – Nick Kavadias Jan 08 '11 at 15:56
  • Hi all - I am planning to connect them to raid cards in the future. Usage will be file storage (backup)... – Industrial Jan 08 '11 at 17:46
  • Change the question. 4-6 discs are not "a lot". They are very few for rackmount - a 1U-high case can handle 8 discs. I have 24 discs in 2U, soon another 72 in 4. 4-6 is too small for most enterprise vendors. – TomTom Jan 08 '11 at 17:54
  • 1
    @Industrial: ok, but will those RAID cards be in a server? And how exactly do you plan to connect them to the disks? – Massimo Jan 08 '11 at 17:54
  • @Massimo, the plan is probably to get raid cards, put in a dedicated server connected with sata cables to the drives – Industrial Jan 09 '11 at 22:51
  • I had the same problem - I was surprised how hard it is to get simple HDD storage without astronomical costs ($700 or more). Now I think the best option is to use the chassis of a dead server for the task. If it sits just above/below the real server in the rack, the 1 m SATA cable limit is solved. – peterh May 20 '20 at 15:31

3 Answers

34

It's easy to look at pictures of hard drive caddies and storage arrays, but that isn't going to help. As I'm sure you know, it's not just about getting a large number of disks and throwing them into a rack - you need to think about how they will be accessed, monitored, controlled, etc. I'm also a little confused - in your question title you talk about "many" hard drives, and in the detail you talk about 4 drives - do you literally mean 4 drives, or do you mean 4 drive chassis of the sort in your picture?

The most "economical way of getting lots of disk into a rack mount" is difficult to answer because what that actually means changes depending on the problem you're trying to solve. You need to define what you're going to use them for, what sort of risks are acceptable to you and how you define "economical". And while you might have a tight budget, which is fine, you need to accept there will be some real costs here, either in time or money if not both.

What sort of problem are you trying to solve?

In other words, what do you want to do with the disks, how will they connect to the thing(s) you want to use them with, etc.? Different types of storage are suited to different types of job - broadly speaking, you can divide "a bunch of disks in a rack" into three categories depending on what they are connected to (there are lots of other ways to group this and break it down, of course).

Direct Attached Storage - DAS for short

This is a dedicated storage array that plugs directly into an existing server to expand the storage available on that server, usually via either SCSI, (more recently) SAS, or (typically at the lower end) SATA. This will give you a reasonably economical way of providing a lot of storage to one machine. That one machine might then act as a file server and publish shares on your network to hold files, and you can even find software to turn this hypothetical server into a NAS (see openfiler or FreeNAS for examples) or a SAN (openfiler is an obvious example here too).

Network Attached Storage - NAS for short

A NAS is essentially a minimalistic server that is dedicated to providing shared storage on a network. Typically this will be an appliance with a highly tuned OS and file system, designed to publish fileshares on a network with reasonable performance and security, and not do a lot else (though many home/small office NASes do other tricks as well).

If you're trying to provide bulk "network" storage - perhaps centralised storage for office workers to store documents, or even a central point for their workstations to be backed up to - this can be a good bet. You will probably find that a NAS might cost more than a DAS solution, but then you don't also have to provide a server and spend time configuring it as a file server. You pays your money and you takes your choice. There are some cheap NAS devices out there (like this one), but once you start talking about rack-mountable devices you're talking about "enterprise computing", and the prices and features start going up accordingly.

Storage Area Network - SAN for short

A SAN is a more specialised network file store, which is designed to allow its storage to be divided between several servers and viewed "logically" on each server as if it were a local direct attached/internal device.

SANs are connected to the servers using them by a "network" that is usually (but not always) dedicated to the SAN connections, to ensure good security, reliability and performance.

SAN infrastructure and disk typically range from "quite expensive" to "is that really a price, or an international phone number", so given your worries about budget this will probably be outside your price range - though depending on your requirements it may turn out this is what you need, in which case you may be able to set one up for "free" using the resources I suggest above.

Risk, and how you define "economical"

You mention a NAS that supports RAID as being out of your price range in your question, but you need to think about risk - only you can define what chances you are prepared to take with your data and how valuable it is, but you need to be aware that the more disks you have in a storage array, the greater the chances that one will fail and the greater the chances that another will fail while the first one has not yet been replaced and brought back online. There's a discussion about this here.
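To make that risk concrete, here's a rough back-of-envelope sketch in Python. The 3% annual failure rate is a hypothetical figure, and real drive failures are not independent (shared batches, heat, vibration), so treat the numbers as illustrative lower bounds, not a prediction:

```python
# Back-of-envelope estimate: probability that at least one drive in an
# array fails within a year, given a per-drive annual failure rate (AFR).
# Assumes independent failures, so real-world odds are likely worse.

def p_any_failure(n_drives: int, afr: float) -> float:
    """P(at least one of n_drives fails in a year) = 1 - (1 - afr)^n."""
    return 1 - (1 - afr) ** n_drives

# With a hypothetical 3% AFR, compare a single drive against a 6-drive array:
print(f"1 drive:  {p_any_failure(1, 0.03):.1%}")   # 3.0%
print(f"6 drives: {p_any_failure(6, 0.03):.1%}")   # 16.7%
```

The chance of actually losing data also depends on whether a second drive dies before the first is replaced and rebuilt, which is exactly why the more redundant RAID levels exist.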

This brings us to "economical" - do you consider this to mean you want the cheapest possible solution, period (which will probably be a server with a lot of DAS boxes, configured in one giant RAID 5 array), and damn the problems and risks this might bring? Or do you consider "economical" to mean "good value for money" (which isn't always the same as 'cheapest')? I'm a lot more comfortable with that second definition myself.
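The trade-off between those two definitions shows up in a quick capacity comparison. The drive size and count below are hypothetical examples; the formulas are just the standard usable-capacity rules for each RAID level:

```python
# Usable capacity for common RAID levels: RAID 5 is the cheapest per usable
# TB but survives only a single drive failure; RAID 6 and RAID 10 trade
# space for safety. The 2 TB drive size is a hypothetical example.

def usable_tb(n_drives: int, drive_tb: float, level: str) -> float:
    if level == "raid5":    # one drive's worth of capacity lost to parity
        return (n_drives - 1) * drive_tb
    if level == "raid6":    # two drives' worth of capacity lost to parity
        return (n_drives - 2) * drive_tb
    if level == "raid10":   # half the drives are mirrors
        return (n_drives / 2) * drive_tb
    raise ValueError(f"unknown RAID level: {level}")

for level in ("raid5", "raid6", "raid10"):
    print(f"6x 2TB drives as {level}: {usable_tb(6, 2.0, level):.0f} TB usable")
# raid5: 10 TB, raid6: 8 TB, raid10: 6 TB
```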

Other considerations

If you want a rack full of hard drives, then you need to be aware that this will require a good power supply and will also generate a lot of heat, which will need to be removed in order to keep the hard disks operating reliably; so air conditioning and careful planning of rack airflow and power needs may be a requirement.

Jay Taylor
  • 138
  • 8
Rob Moir
  • 31,664
  • 6
  • 58
  • 86
8

If you are after raw space and don't really care about performance (as long as it's near 100 MiB/s for streaming access and at least hundreds of IOPS), then it's hard to beat Backblaze pods.

Hubert Kario
  • 6,351
  • 6
  • 33
  • 65
  • VERY nice, but are they actually available for sale or that blog post was just the company's way of saying "look at how cool we are"? – Massimo Jan 08 '11 at 16:16
  • 3
    They don't sell the pods, but you can buy the enclosure from Protocase : http://www.protocase.com – petrus Jan 08 '11 at 16:27
  • You can buy the storage pod case from Protocase. The same company that built them for backblaze. From googling around it looks like in quantities of 1-4 they run about $880 each. – 3dinfluence Jan 08 '11 at 16:28
  • or SuperMicro.2 rack units high: 25 discs, 4 units high: 73 discs, SAS backplanes. – TomTom Jan 09 '11 at 14:45
  • @TomTom ...costing over $2k last time I looked – Hubert Kario Jan 09 '11 at 14:54
  • Backblaze pods ARE HORRIBLE. They are painfully slow, rebuilding the arrays will take A WHOLE MONTH, and losing ONE SINGLE PSU will result in the loss of ALL THE DATA STORED ON IT. That's... not that great; they're all "home user" parts, not designed to run 24/7. – Mircea Chirea Jan 09 '11 at 17:30
  • @Hubert: well, this is enterprise grade, serious stuff. I have a 24-disc one - very nice and satisfied (currently loaded with 12 Velociraptors). You get what you pay for. Racks are enterprise space, and 4-6 discs are just not something people ask for. – TomTom Jan 09 '11 at 18:45
  • @iconiK there are redundant PSUs available, and array rebuild time depends on the size of disks and the configuration (you definitely shouldn't put all the disks in a single RAID 6 array!); anyway, a month doesn't seem likely to rebuild those 4-6 arrays you'd build, but I'd have to do the math. – Hubert Kario Jan 09 '11 at 19:24
  • @Hubert, you can't stick redundant PSUs in the Backblaze pod really. Besides, the BB pod has three RAID 5 arrays (RAID 50), each with a LOT of disks. It's just a horrible design for anything that needs any sort of reliability. And yes, a month is a pretty good guess; they're big arrays with 1.5TB drives; an unrecoverable read error is almost a given during rebuild. – Mircea Chirea Jan 09 '11 at 20:36
  • Chieftec has redundant ATX PSUs. Are you suggesting I can't use them? As for errors: yes, that's why you don't put all the drives in a single RAID set. Each of the proposed arrays at 90MiB/s would resync in about 60h, that's a bit over a week for all arrays. – Hubert Kario Jan 09 '11 at 21:03
  • @Hubert, no you can't with that design, because you need to remove the PSU casing from the chassis to remove the PSUs. Backblaze pod is just a horrible design when the data has any value. In their case, the data is not valuable, but storage is king. – Mircea Chirea Jan 20 '11 at 08:40
4

You can't really just "house my hard drives in my rack". Hard drives are built to operate inside PCs, and, for example, SATA cable lengths are limited to ~1 meter.

Technologies originally targeted at the enterprise, such as SAS and Fibre Channel, can have expanders, long cable runs, etc. But they're not what you would be likely to consider "economical".

One common way to get lots of storage on the cheap is a dedicated server built with a custom enclosure such as this one from Supermicro. So you'd have a PC inside, connecting to all the hard drives via SATA or SAS RAID adapter(s). And then you would connect the server to the LAN via a Windows or Linux server OS and your choice of protocol.

Another common way is to buy a NAS appliance from a reputable vendor. Look around, there are many examples. The NAS approach is arguably better because you have one vendor who supplies a ready-made, tested solution with support.

Note that when you have 'many' hard disk drives you really need to think about redundancy (RAID), because the likelihood of a disk failure grows with the number of disks, of course...
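One way to see why rebuilds make this worse, as a rough sketch: the "1 error per 10^14 bits read" figure below is a typical consumer-drive spec-sheet number, and the arithmetic assumes read errors are independent, so take it as illustrative only:

```python
# Sketch: chance of hitting an unrecoverable read error (URE) while
# rebuilding a degraded RAID 5 array, which must read every surviving
# drive in full. Consumer drives are commonly spec'd at ~1 URE per 1e14
# bits read (a spec-sheet figure, used here as an assumption).

URE_PER_BIT = 1e-14   # typical consumer-drive spec
BITS_PER_TB = 8e12    # 1 TB = 8e12 bits

def p_ure_during_rebuild(surviving_drives: int, drive_tb: float) -> float:
    """P(at least one URE while reading all surviving drives in full)."""
    bits_read = surviving_drives * drive_tb * BITS_PER_TB
    return 1 - (1 - URE_PER_BIT) ** bits_read

# Rebuilding a 6x 1.5 TB RAID 5 means reading 5 surviving drives in full:
print(f"{p_ure_during_rebuild(5, 1.5):.0%}")   # roughly 45%
```

This is why large arrays of big consumer drives tend to be split into several smaller RAID sets, or use double-parity levels like RAID 6.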

  • SATA II apart from faster speeds added expanders to the mix... – Hubert Kario Jan 08 '11 at 16:20
  • @Hubert Kario: You are correct, I had actually completely forgotten about that. But that's because I have never seen an actual SATA II expander solution in a datacenter. Are you aware of any do-it-yourself SATA II expanders that actually ship in volume, and reliably work? –  Jan 08 '11 at 16:24
  • +1 good practical stuff – Rob Moir Jan 08 '11 at 23:27
  • @Hubert, SATA II doesn't add expanders, it adds port multipliers, which are horrible. SAS expanders are far better. – Mircea Chirea Jan 09 '11 at 17:31
  • Both technologies make bus sharing possible (lowering available bandwidth), what SATA can't do is multipathing. Besides, you have to remember that SATA is more of home, not enterprise, technology. On the other hand, if you can put enough redundancy in the system it will work out OK (see Google). – Hubert Kario Jan 09 '11 at 19:12
  • I think the 1m limit is not really a problem - the problem is that many servers have only 1-3 HDD slots, while many more are required. Typically, the chassis would sit just below/above the server in the rack. For this reason, I believe the accepted answer is a NAS. – peterh May 20 '20 at 12:56