74

It seems like there's a lot of disagreement in mindsets when it comes to installing rackmount servers. There have been threads discussing cable arms and other rackmount accessories, but I'm curious:

Do you leave an empty rack unit between your servers when you install them? Why or why not? Do you have any empirical evidence to support your ideas? Is anyone aware of a study which proves conclusively whether one is better or not?

Matt Simmons
  • 20,218
  • 10
  • 67
  • 114

18 Answers

83

If your servers use front to back flow-through cooling, as most rack mounted servers do, leaving gaps can actually hurt cooling. You don't want the cold air to have any way to get to the hot aisle except through the server itself. If you need to leave gaps (for power concerns, floor weight issues, etc) you should use blanking panels so air can't pass between the servers.
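
As a rough sanity check on why bypass air matters, here is a back-of-the-envelope airflow calculation (a sketch only: the wattage, temperature rise and bypass figures are illustrative assumptions, and 3.16 is the usual watts-to-CFM rule of thumb):

    # Rough check of how much air a rack of front-to-back cooled servers
    # needs, using the rule of thumb CFM ~= 3.16 * watts / delta_T(degF).
    # All numbers below are illustrative assumptions, not measurements.
    rack_watts = 8000        # assumed: ~20 x 1U servers drawing ~400 W each
    delta_t_f = 25           # assumed front-to-back temperature rise in degF

    required_cfm = 3.16 * rack_watts / delta_t_f
    print(f"Airflow the servers must pull through themselves: ~{required_cfm:.0f} CFM")

    # Cold air that slips through an open U straight into the hot aisle is
    # air the CRAC delivered but no server ever used, so the room has to
    # supply that much more to keep inlet temperatures down.
    bypass_fraction = 0.15   # assumed: 15% of the supplied cold air bypasses
    extra_cfm = required_cfm * bypass_fraction / (1 - bypass_fraction)
    print(f"Extra cold air needed to cover the bypass: ~{extra_cfm:.0f} CFM")

Every open U without a blanking panel pushes that bypass fraction up, which is why filling the gaps matters more than it looks like it should.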

Evan Anderson
  • 141,071
  • 19
  • 191
  • 328
jj33
  • 11,038
  • 1
  • 36
  • 50
  • 6
    Yes, if you leave a gap, you need to fill it with a panel to prevent that. – Thomas Jun 18 '09 at 11:25
  • 2
    +1. Rack mount servers and racks are designed for air flow with all panels and bezels on and all U's filled, much like the air flow in a PC is designed for having all the covers on. Circumventing the design by leaving gaps and/or removing panels and covers is likely to do more harm than good. – joeqwerty Nov 25 '09 at 22:17
24

I have never skipped rack units between rackmount devices in a cabinet. If a manufacturer instructed me to skip U's between devices I would, but I've never seen such a recommendation.

I would expect that any device designed for rack mounting would exhaust its heat through either the front or rear panels. Some heat is going to be conducted through the rails and the top and bottom of the chassis, but I would expect that to be very small in comparison to the heat exhausted out the front and rear.

Evan Anderson
  • 141,071
  • 19
  • 191
  • 328
  • 7
    In fact, if you are skipping rack units, you need to use covers between each server, otherwise you will get air mixing between your hot and cold aisles. – Doug Luxem Jun 18 '09 at 02:33
24

In our data center we do not leave gaps. We have cool air coming up from the floor and gaps cause airflow problems. If we do have a gap for some reason we cover it with a blank plate. Adding blank plates immediately made the tops of our cold aisles colder and our hot aisles hotter.

I don't think I have the data or graphs anymore but the difference was very clear as soon as we started making changes. Servers at the tops of the racks stopped overheating. We stopped cooking power supplies (which we were doing at a rate of about 1/week). I know the changes were started after our data center manager came back from a Sun green data center expo, where he sat in some seminars about cooling and the like. Prior to this we had been using gaps and partially filled racks and perforated tiles in the floor in front and behind the racks.

Even with the management arms in place, eliminating gaps has worked out better. All our server internal temperatures, everywhere in the room, are now well within spec. That was not the case before we standardized our cable management, eliminated the gaps, and corrected our floor tile placement. We'd like to do more to direct the hot air back to the CRAC units, but we can't get funding yet.
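
If you want to check the same thing on your own boxes, most server BMCs will give you those temperatures over IPMI. Here's a minimal sketch (it assumes ipmitool is installed and that your BMC's sensor listing ends readings in "degrees C"; the exact sensor names and output format vary by vendor, so adapt the parsing to your hardware):

    import re
    import subprocess

    def read_temps():
        # "ipmitool sdr type temperature" lists the BMC's temperature sensors.
        out = subprocess.run(
            ["ipmitool", "sdr", "type", "temperature"],
            capture_output=True, text=True, check=True,
        ).stdout
        temps = {}
        for line in out.splitlines():
            # Typical line: "Ambient Temp | 32h | ok | 7.1 | 24 degrees C"
            m = re.match(r"(.+?)\s*\|.*?([\d.]+) degrees C", line)
            if m:
                temps[m.group(1)] = float(m.group(2))
        return temps

    if __name__ == "__main__":
        for sensor, celsius in sorted(read_temps().items()):
            print(f"{sensor}: {celsius:.1f} C")

Logging those readings before and after a layout change makes the effect of blanking panels and tile placement easy to show.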

squillman
  • 37,618
  • 10
  • 90
  • 145
Laura Thomas
  • 2,825
  • 1
  • 26
  • 24
9

I don't skip Us. We rent, and Us cost money.

There's no reason to skip them for heat these days. All the cool air comes in the front and out the back; there are no vent holes in the tops any more.

mrdenny
  • 27,074
  • 4
  • 40
  • 68
  • 1
    Re: "Us cost money." I use to think space was a major factor in datacenter pricing. Then I got a job in the hosting sector, turns out most costs in a Colo environment are from power circuits & cross connects. – JamesBarnett Jan 29 '13 at 03:36
6

Google doesn't leave a U between servers, and I'd guess they are concerned about heat management. It's always interesting to watch how the big players do the job. Here is a video of one of their datacenters: http://www.youtube.com/watch?v=zRwPSFpLX8I&feature=player_embedded

Go directly to 4:21 to see their servers.

Mathieu Chateau
  • 3,175
  • 15
  • 10
5

We have 3 1/2 racks worth of cluster nodes and their storage in a colocation facility. The only places we've skipped U's is where we need to route network cabling to the central rack where the core cluster switch is located. We can afford to do so space-wise since the racks are already maxed out in terms of power, so it wouldn't be possible to cram more nodes into them :)

These machines run 24/7 at 100% CPU, and some of them have up to 16 cores in a 1U box (4x quad-core Xeons); I've yet to see any negative effects of not leaving spaces between most of them.

So long as your equipment has a well-designed air path, I don't see why it would matter.

Kamil Kisiel
  • 11,946
  • 7
  • 46
  • 68
5

Don't leave space if you have cool air coming from the floor, and use blanking panels in unused U space. If you just have a low-tech cooling system using a standard A/C unit, it is best to leave gaps to minimize hot spots when you have hot servers clumped together.

  • 4
    If your servers use front-to-back fan cooling, it's not wise at all to leave gaps; it will hurt the airflow. – pauska Jun 18 '09 at 10:52
5

I have large gaps above my UPS (for installing a second battery in the future) and above my tape library (if I need another one). Other than that, I don't have gaps, and I use panels to fill up empty spaces to preserve airflow.

pauska
  • 19,532
  • 4
  • 55
  • 75
3

I wouldn't leave gaps between servers, but I will for things like LAN switches - this allows me to put some 1U cable management bars above and below... but it's definitely not done for cooling.

Mitch Miller
  • 575
  • 3
  • 13
3

Every third, but that's due to management arms and the need to work around them rather than heat. The fact that those servers each have 6 Cat5 cables going to them doesn't help. We do make heavy use of blanking panels, and air-dams on top of the racks to prevent recirculation from the hot-aisle.

Also, one thing we have no lack of in our data-center is space. It was designed for expansion back when 7-10U servers were standard. Now that we've gone with rack-dense ESX clusters it is a ghost town in there.

sysadmin1138
  • 131,083
  • 18
  • 173
  • 296
  • OK, 6 cables, let's see...2 management interfaces, 2 iscsi interfaces and 2....dedicated to the cluster manager? – Matt Simmons Jun 18 '09 at 00:50
  • Don't use management arms and you don't have to skip units. :) – Doug Luxem Jun 18 '09 at 01:09
  • 1
    6 cables: 1x HP iLO card, 1x ESX mgmt LUN, 4x VM Luns. Also, fibers for the SAN. We haven't gone iSCSI yet. If we were willing to fully undress the servers before pulling them out, we'd definitely go w/o the arms. – sysadmin1138 Jun 18 '09 at 01:54
  • 1
    Having seen your data center I have to wonder if you set things up in a traditional hot aisle cold aisle setup rather than scattered around your giant room if you'd get better thermal performance. – Laura Thomas Jun 18 '09 at 03:57
2

No gaps, except where we've taken a server or something else out and not bothered to re-arrange. I think we're a bit smaller than many people here, with 2 racks that only have about 15 servers plus a few tape drives and switches and UPSes.

Ward - Reinstate Monica
  • 12,788
  • 28
  • 44
  • 59
2

No gaps other than when planning for expanding SAN systems or things like that. We prefer to put new cabinets close to the actual controllers.

If you have proper cooling, leaving gaps will not be beneficial unless the server is poorly constructed.

chankster
  • 1,324
  • 7
  • 9
2

I get the impression (perhaps wrongly) that it is a more popular practice in some telecoms environments where hot/cold aisles aren't so widely used.

It's not suited to a high-density, well-run datacentre though.

Dan Carley
  • 25,189
  • 5
  • 52
  • 70
1

I usually leave a blank RU after around 5RU of servers (i.e. 5x 1RU, or 1x 2RU + 1x 3RU), but that depends on the cooling setup in the data centre you're in. If cooling is delivered in front of the rack (i.e. through a grate in the floor in front of it), the idea is that cool air is pushed up from the floor and your servers suck it through themselves; in that case you'd typically get better cooling by not leaving blank slots (i.e. by using a blank RU cover). But if cooling is delivered through the floor panel inside your rack, then in my experience you get more efficient cooling by breaking the servers up rather than piling them on top of each other for the entire rack.

Brendan
  • 914
  • 6
  • 5
1

In a situation where we had our own datacenter and space wasn't a problem, I used to skip a U (with a spacer to block airflow) between logical areas: web servers had one section; database, domain controller, e-mail, and file servers had another; and firewalls and routers had another. Switches and patch panels for outlying desktops were in their own rack.

I can remember exactly one occasion where I skipped a U for cooling reasons. This was an A/V cable TV looping solution in a high school, where there were three units that were each responsible for serving the cable TV system to one section of the building. After the top unit had to be replaced for the third time in two years due to overheating, I performed some "surgery" on the rack to make extra mounting holes so I could leave 1/2U of space between each of the three units (1U of space in total).

This did solve the problem. Needless to say, this was thoroughly documented, and for extra good measure I taped a sheet in the gaps on top of one of them explaining why things were the way they were.

There are two lessons here:

  1. Leaving a gap for cooling is only done in exceptional circumstances.
  2. Use a reputable case or server vendor. Be careful of buying equipment that tries to pack 2U worth of heat into 1U worth of space. This will be tempting, because the 1U system may appear to be much cheaper. And be careful of buying an off-brand case that hasn't adequately accounted for airflow.
Joel Coel
  • 12,910
  • 13
  • 61
  • 99
0

I have them stacked in the one rack I have. Never had any problems with it, so I never had any reason to space them out. I would imagine the biggest reason people would space them out is heat.

0

Leaving gaps between servers can affect cooling. Many data centres operate suites on a 'hot aisle' / 'cold aisle' basis, and gaps between servers disrupt that efficient airflow.

This article may be of interest:

Alternating Cold and Hot Aisles Provides More Reliable Cooling for Server Farms

Kev
  • 7,777
  • 17
  • 78
  • 108
0

I've left 1/3 of a RU between two switches before, mostly because they were 1 1/3 RU high each.

Had I put them hard together, the gear higher up would have been out of position by 1/3 of a U.

Other than that, I've never left space purely for cooling, only for future growth.

Criggie
  • 2,219
  • 13
  • 25