
Aside from space, why use a rack mountable server over a tower server?

Or is space the only reason...

Tower servers are cheaper, have more space for cooling, and are more expandable... Why rack mount?

Soviero
  • So for example, for a small business (1-2 servers), tower servers are better? – Soviero Dec 14 '10 at 19:47
  • It's usually around the 4 or 5 server mark that you should be switching to rack; but every business is different. Space != Cooling. – Chris S Dec 14 '10 at 19:57

4 Answers


When you rack mount the servers you end up creating what are called hot and cold aisles. The hot aisle is where all the servers vent their hot air, and the cold aisle is where you pump the chilled air from the AC. In a server room it isn't about how many fans you can put inside the machine, it is about how much cold air you can get to the machine. Rack mount servers are designed to move the maximum amount of cold air through the machine as quickly as possible.

Rack mounted servers also make it much easier to get into the machines: you just slide the machine out on its rails, do whatever needs to be done, then slide it back into place. When done correctly there is no need to unplug a single cable while doing this.

You mentioned space, but I'll talk a little about that anyway. Towers come in all shapes and sizes. If you start putting towers on shelves in a rack, you can probably fit 3-6 machines per rack, depending on how tall they are and whether you can fit two on a shelf. With rack mounted servers you can easily get 42 machines in a rack (assuming 1U servers and a 42U rack). That is a much more efficient use of space, and space costs money.
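To put rough numbers on that density argument, here is a minimal back-of-envelope sketch. The 42U and 3-6 tower figures come from the paragraph above; the floor-space-per-rack value is just an assumed example for illustration, not a real measurement.

    # Back-of-envelope rack density comparison.
    # Server counts are taken from the answer above; the per-rack floor
    # space is a hypothetical example value (including clearance).
    rack_units = 42            # a standard full-height 42U rack
    towers_per_rack = 6        # optimistic: two towers per shelf, three shelves
    rack_servers = rack_units  # 1U servers, one per rack unit

    floor_space_sqft = 10.0    # assumed footprint per rack, for illustration only

    print(f"Towers:     {floor_space_sqft / towers_per_rack:.2f} sq ft per server")
    print(f"1U servers: {floor_space_sqft / rack_servers:.2f} sq ft per server")

Whatever footprint you assume, the per-server share of floor space drops by roughly a factor of seven when you go from shelved towers to 1U rack servers.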

mrdenny
  • Small racked environments are likely not to employ hot/cold aisles. Cheap rack servers don't have rails that they slide on. – Chris S Dec 14 '10 at 19:52
  • Really? Even the cheapest whitebox servers that I've bought all had rails so you could work on the machines. – mrdenny Dec 14 '10 at 19:56
  • I confirm what Chris says. Supermicro has a lot of no-rails servers and they are a mess to rack, as gravity quickly catches up with only two front screws. – Antoine Benkemoun Dec 14 '10 at 20:04
  • I've got an HP DL160 that doesn't have sliding rails; and several MSA SAN boxes (though admittedly those aren't servers) without them too. – Chris S Dec 14 '10 at 20:04
  • Surely the HP server has rails as an option. The MSAs shouldn't have rails, as there shouldn't be any need to get to the top of the unit; everything should load from the front or the back. Speaking of MSAs, if you happen to have an MSA 2000 G3 can you verify this? http://bit.ly/ebqwNT – mrdenny Dec 14 '10 at 21:09
  • Most servers I've dealt with don't have rails that let you work on the gear without pulling it from the rack. I've mostly worked with Supermicro, ASUS, Penguin, and Dell, and only the Dells have had that. Would be nice though. – Sean Reifschneider Dec 15 '10 at 08:00
  • @Sean they make life so much easier when you need to get into the server. Especially when the server has a bunch of cables coming out of the back. I had some VMware hosts that had 17 cables per server. With the rails and cable management arms working on the server was a piece of cake. Without the rails I might have killed myself. – mrdenny Dec 15 '10 at 09:37
  • @mrdenny, No doubt, I just haven't had servers that could do it. In my case it's not SO much of a big deal because most of our servers just have power, ethernet or two, and KVM, so not many cables to remove and replace. Though the other night I had to cut an Ethernet cable because I couldn't get it disconnected while in the rack. That one actually had to come out though, I needed the space in the rack to cable the redundant power to a switch. – Sean Reifschneider Dec 15 '10 at 21:22

Beyond space, there are at least a couple of reasons why rack mount servers are desirable. First, it is easier to implement air circulation and cooling systems when you can control the airflow by closing the gaps between machines; this would not be easy to do with non-standardized case sizes. Second, some might find it helpful to be able to physically secure a group of machines via a locking rack.

There are probably more, but space would be the primary one.

Joe

Space is the only reason, really - if you have a lot of servers, the floor space it takes to house them rapidly gets more expensive than anything else.

palmaceous

Space in a datacenter is extremely expensive, much more so than the servers, relatively speaking.

Sven
  • Mmn, no, that's not the general case. It's true that there are some old datacenters, which were built with assumptions about space/power/cooling constraints that don't fit with modern servers, where what you say may be true. But in most newer datacenters, space is actually the cheaper part -- and power is generally the limiting factor. For brand new datacenter builds, see http://perspectives.mvdirona.com/2010/09/18/OverallDataCenterCosts.aspx –  Feb 09 '11 at 12:53