When I was working in our server room, I noticed that it was very cold.
I know that the server room has to be cold to offset the heat of the servers, but perhaps it is TOO cold.
What is an appropriate temperature to keep our server room at?
Recommendations on server room temperature vary greatly.
This guide says that:
General recommendations suggest that you should not go below 10°C (50°F) or above 28°C (82°F). Although this seems a wide range these are the extremes and it is far more common to keep the ambient temperature around 20-21°C (68-71°F). For a variety of reasons this can sometimes be a tall order.
This discussion on Slashdot has a variety of answers but most of them within the range quoted above.
Update: As others have commented below Google recommends 26.7°C (80°F) for data centres.
Also, the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) has recently updated their recommended temperature range to 18°C-27°C (64.4°F-80.6°F).
However, this article again highlights that there is still no consensus on the subject. From the article, I would highlight that:
...nudging the thermostat higher may also leave less time to recover from a cooling failure, and is only appropriate for companies with a strong understanding of the cooling conditions in their facility.
IMO most companies do not have that strong an understanding of their cooling conditions, so in a small-business environment it is safer to run the room a little cooler.
NB: there are many more factors to consider in a server/data room than just the temperature; air flow and humidity, for example, are also important concerns.
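The ranges quoted above are easy to turn into a quick sanity check. A minimal sketch (the 18-27°C band is the ASHRAE recommendation mentioned above; the constant and function names are my own, not from any standard tooling):

```python
# Minimal sketch: check an ambient reading against the ASHRAE
# recommended band quoted above (18-27 C). Names are illustrative.
ASHRAE_LOW_C, ASHRAE_HIGH_C = 18.0, 27.0

def c_to_f(temp_c):
    """Convert Celsius to Fahrenheit."""
    return temp_c * 9 / 5 + 32

def in_ashrae_range(temp_c):
    """True if the ambient reading falls inside the recommended band."""
    return ASHRAE_LOW_C <= temp_c <= ASHRAE_HIGH_C

print(in_ashrae_range(21.0))  # a common 20-21 C setpoint
print(c_to_f(ASHRAE_HIGH_C))  # upper bound in Fahrenheit
```

Feed it whatever your thermostat or monitoring agent reports; anything outside the band is worth investigating even if the hardware tolerates it.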
Google's datacenter best practices recommends 80 degrees:
Raising the cold aisle temperature will reduce facility energy use. Don't try to run your cold aisle at 70F; set the temperature at 80F or higher -- virtually all equipment manufacturers allow this.
We run at 72, but then again I don't entirely trust that our own room was designed with airflow in mind.
As others have said, somewhere in the low 70's F is good. However, it's even more critical to make sure the rack and the hardware in it can "breathe". If hot air is trapped in the rack - or in a server chassis - then the low ambient temperature won't really do any good.
My server room is set to 69 degrees. We have one air conditioning unit that services that room and it runs 24/7 to keep the room at 69 degrees year round.
All the server rooms I've seen are usually between 20°C and 25°C, but from experience I've noticed that hardware is more sensitive to temperature variation than to any particular temperature. I've often seen hardware fail after a bump of, say, 4-5°C, even if it's only from 20 to 25.
So stability is key, as is air flow, of course.
18°C (65°F). It's a bit colder than it has to be, but if something fails it gives us a few precious extra minutes to react before it gets uncomfortably hot.
https://en.wikipedia.org/wiki/Data_center_environmental_control contains an interesting overview of vendor datacenter temperature recommendations:
We like ours to be between 19°C and 23°C with alarms at 28°C - we're a 98% HP blade place, so if ours reaches that alarm level, certain servers/enclosures 'turn their wick down' to lower overall power draw (HP calls it Thermal Logic).
Remember that Google's advice of 80°F assumes virtually no cost to shutting down a datacenter when it overheats due to unexpected load or air-conditioning failure. They also assume much tighter control of airflow over critical components.
I'm in the Navy. On our ship we kept our rooms at less than 65 degrees Fahrenheit. We'd be in the middle of the Red Sea and it would be 120 degrees outside. We'd walk out of the space (at 62 degrees) with a sweater, gloves, a coat and a peacoat, and everyone would look at us like we were crazy. Everyone else was stripped down to their T-shirts and sweating like crazy.
The problem is, our cooling system does not remove the humidity very well. So, if it's humid outside and the temp goes up to 70 degrees or more it starts to get sticky in the space.
In my experience, it depends on the equipment. I know one company had three server rooms. One was kept about 80°F, the other, with fewer servers, was at about 72°F, and the third, with only two servers, was at about 65°F. Never had a problem in the time I was there, but, like others have said, 75°F or slightly below is probably best, since it gives a little elbow room if the AC goes out.
We've found that anything over 30°C at the air inlets causes temperature warnings (which we monitor); in general that means an ambient room temperature below 25°C.
In Australia it seems Emerson (Liebert) decided that 21°C would be the standard, and every room I've been in here has been set the same.
If your room is big enough, look at hot-aisle/cold-aisle layouts, blanking panels, and the like; they can really improve cooling efficiency.
There is no hard and fast answer to this question. It depends a lot on the type of computers and the design of the datacenter.
One of the key points made in that slashdot discussion is that the real difficulty is dealing with hotspots and airflow.
A cool ambient temperature in the room is an easy way to cool the thousands of hotspots in it. But it's inefficient: the datacenter cools the whole room when all that's really needed is to cool the hot parts of the servers (CPUs, memory, storage, power supplies, etc.). Many colocation facilities have customers with disparate types of hardware (115V or 208V, good designs, bad designs).
A datacenter also needs to maintain enough cool air to provide a buffer in the event of a power outage. A buffer of cool air means the UPS and generators work less. Imagine a datacenter losing 100% of power all at once (all nodes go down, and cooling goes down with them). The nodes may not have power, but the hardware takes time to cool and keeps radiating heat for a while, and this can make a datacenter very hot.
Organizations with warm server rooms often use closed cabinets and fancy ductwork to direct cool air directly over the hot nodes. A specialized power infrastructure can help to reduce the heat from power conversion. This keeps the nodes cool enough, even though the ambient temperature in the server room is warm. These organizations usually have tight control over most aspects of their hardware, and can custom design the entire datacenter for their needs. They can make investments and reap the benefits.
First: where are you measuring the room temperature? ;-) What is the temperature on the rack's cold side? On its hot side? At the A/C intake?
It really depends on the devices (feel the heat that comes out of an Xbox). Thermal management is a science; Honda (and many others) are even starting to water-cool exhausts ;-) ...
I will tell you this about the datacenters I've been in:
Equinix keeps their DCs cold (makes for a good 'experience' when execs go on tour).
RW keeps theirs warm/hot, 80°F or so, with pretty chilly air coming out of the floor (better room/vault design and higher-quality system design overall).
Coresite (at least where I've been) is closer to RW, but it depends on the CR and the customers. Some CRs are at 80, some at 70. (Cold aisles are always cold.)
The temp you're really looking to monitor is the internal server temperature - not the external ambient temperature. Watch your server temperature (all servers have internal thermometers that report through the server interface) and adjust your ambient temperature accordingly. This will account for air flow, etc. Having a back-up cooling plan is a good idea, even portable A/C units can be lifesavers if there is a primary cooling system failure.
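To make "watch your server temperature" concrete, here's a minimal sketch. It assumes you've already collected readings into a plain dict (however your environment exposes them: IPMI, lm-sensors, SNMP, a monitoring agent); the function name is mine, and the 30°C default mirrors the inlet-warning figure mentioned in another answer rather than any standard:

```python
# Minimal sketch: flag sensors that have crossed a warning threshold.
# `readings` is assumed to be {sensor_label: temperature_in_C}, gathered
# by whatever means your hardware provides (IPMI, lm-sensors, SNMP, ...).
# The 30 C default is illustrative; tune it to your equipment.

def over_threshold(readings, warn_c=30.0):
    """Return [(label, temp)] for every sensor above warn_c, hottest first."""
    hot = [(label, t) for label, t in readings.items() if t > warn_c]
    return sorted(hot, key=lambda pair: pair[1], reverse=True)

sample = {"inlet": 31.5, "cpu0": 28.0, "cpu1": 34.2}
for label, temp in over_threshold(sample):
    print(f"WARNING: {label} at {temp:.1f} C")
```

Run something like this on a schedule and alert on any output; watching the trend over time also tells you early when an A/C unit is starting to struggle.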
I have found 19-21°C a good temperature. Google runs its server rooms at a slightly higher temperature than you would expect, as they found the slightly higher failure rate preferable to paying the extra cooling cost.