In a situation where we had our own datacenter and space wasn't a problem, I used to skip a U (with a spacer to block airflow) between logical areas: web servers had one section; the database, domain controller, e-mail, and file servers had another; and firewalls and routers had a third. Switches and patch panels for outlying desktops were in their own rack.
I can remember exactly one occasion where I skipped a U for cooling reasons. This was an A/V cable TV looping solution in a high school, where three units were each responsible for serving the cable TV system to one section of the building. After the top unit had to be replaced for the third time in two years due to overheating, I performed some "surgery" on the rack to make mounting holes so I could leave 1/2U of space between each of the three units (1U of space in total).
This did solve the problem. Needless to say, it was thoroughly documented, and for extra good measure I taped a sheet over one of the gaps explaining why things were the way they were.
There are two lessons here:
- Leaving a gap for cooling is only done in exceptional circumstances.
- Use a reputable case or server vendor. Be wary of equipment that tries to pack 2U worth of heat into 1U worth of space; this will be tempting, because the 1U system may appear much cheaper. Likewise, be wary of off-brand cases that haven't adequately accounted for airflow.