0

I am looking after a few servers -- a small number and not worth calling a "data center". As the servers aren't located in a "proper" computer room, I am wondering whether or not the power outlets can support what we have.

I've looked through questions from others here and things have cleared up a lot. But I still have a couple of lingering questions.

Suppose a power outlet has a rating of 20 Amps. That means it can supply at most 20 A. A server has two power supplies (i.e., one is redundant), each with 10 A written on it. My understanding from other posts is that this still means the server draws at most 10 A; the second power supply is really just for backup. This is despite the fact that the maximum current is written on each power supply.

If I plug a power bar into the aforementioned power outlet, does that mean that I can support at most two such servers?

Other questions here seem to imply "No". That's because I should really look at the actual power consumption (i.e., Watts = Voltage * Amperes). I can believe that, but isn't that unsafe and a bit subjective? The load can vary, and if you go over and plug in 3, 4, ... of these 10 Amp servers, you're really betting that the load on the connected servers won't get high -- high enough for them to draw their full 10 Amps each.

I think the reason I'm puzzled is that if this were a toaster, a heater, a hair dryer, etc., the draw would be a lot easier to predict. Since server load varies, it seems I'm "forced" to either go over the rating on the power outlet or have an electrician install more power outlets.

Assuming what I've said above (cobbled together from others' questions) is correct, is going over the outlet's current rating the general practice? If so, then the million dollar question is how much over is "safe"? Or is there no way to know other than to measure the power consumption (i.e., the number of Watts) during normal use and see how many Amps the servers are actually drawing?
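To make the arithmetic concrete, here's roughly what I'm trying to work out (the voltage and per-server draw below are guesses on my part, not measurements):

```python
# Back-of-the-envelope check of the Watts = Volts * Amps reasoning.
# Every figure here is an assumption or placeholder, not a measurement.
VOLTAGE = 120.0        # assumed mains voltage; yours may be 230 V
OUTLET_AMPS = 20.0     # the outlet rating mentioned above

server_watts = 350.0   # hypothetical real-world draw of one server under load
server_amps = server_watts / VOLTAGE

print(f"One such server draws roughly {server_amps:.1f} A")                     # ~2.9 A
print(f"Servers per outlet (naive): {int(VOLTAGE * OUTLET_AMPS // server_watts)}")
```

If the real draw is anywhere near that placeholder figure, the 10 A on the label is clearly a worst-case ceiling rather than what the outlet would actually see -- but I don't know the real figure, which is the crux of my question.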

Thank you!

Ray

    You can use a device such as a [Kill a Watt](http://www.p3international.com/products/p4400.html) to see the actual power draw at a given time. – Bert Mar 01 '15 at 17:09
  • The server PSU has a maximum power rating (say, 1000 Watts). The server will not consume that much power. Find out how much power the server will actually consume under maximum load (say, all cores at 100%) - depending on how many drives, which CPUs and how much memory, it will more likely be in the 300-400 Watt range. – Dan Mar 01 '15 at 17:15

3 Answers

3

The reason computer power supplies are rated for a maximum power consumption and do not state the exact current they will draw (unlike your hairdryer) is quite simply that power supplies are generic components and a server configuration is pretty dynamic (only one CPU socket occupied or all of them, completely filled with power-hungry 15k spinning disks or drive-less with ESXi booting from a USB flash drive, no expansion cards or multiple GPU cards, etc.), which makes an exact figure too difficult for the vendor to state.

So while the power supply allows all your components combined to draw up to 10 A before blowing its fuse, the system will most likely draw significantly less. The actual power consumption can be estimated from the components, but ideally it is measured.
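As a rough illustration of the "estimated" route (the component figures below are illustrative placeholders, not vendor numbers -- take them from your own spec sheets):

```python
# Estimate a server's draw by summing worst-case per-component figures.
# All values are illustrative placeholders; substitute your own spec-sheet numbers.
components_watts = {
    "CPUs (2 sockets x 95 W TDP)": 2 * 95,
    "Memory (8 DIMMs x 5 W)":      8 * 5,
    "15k disks (6 x 10 W)":        6 * 10,
    "Motherboard, fans, misc":     60,
}

estimate_watts = sum(components_watts.values())
psu_efficiency = 0.90                        # assumed PSU efficiency

wall_watts = estimate_watts / psu_efficiency
print(f"Estimated draw at the wall: {wall_watts:.0f} W")
print(f"At 120 V that is about {wall_watts / 120:.1f} A")   # at 230 V roughly half that
```

An estimate like this is conservative by design; an actual measurement will usually come in lower.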

Purchasing a specialty device to measure power consumption may not even be necessary: HP servers, for instance, already record such data in their iLO device, as detailed in this answer.
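Many other vendors expose a comparable reading through the BMC. If (and this is an assumption about your particular hardware) the BMC supports DCMI power readings over IPMI, a small script along these lines can pull the figure without buying anything:

```python
# Minimal sketch: read the instantaneous power draw from a DCMI-capable BMC via
# ipmitool. Assumes ipmitool is installed and the BMC actually implements DCMI
# power readings -- not all servers do, so treat this as a starting point only.
import re
import subprocess

def dcmi_power_watts() -> int:
    out = subprocess.run(
        ["ipmitool", "dcmi", "power", "reading"],
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r"Instantaneous power reading:\s*(\d+)\s*Watts", out)
    if match is None:
        raise RuntimeError("BMC did not return an instantaneous power reading")
    return int(match.group(1))

if __name__ == "__main__":
    print(f"Current draw: {dcmi_power_watts()} W")
```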

Something to keep in mind: most servers reach their peak power consumption at boot time, when everything is spinning up and power governors haven't kicked in yet. Also, a common default server configuration is to automatically resume operation after a power failure. That makes for a power-hungry combination when power is restored after an outage.

As a general precaution, it is strongly recommended to install a UPS between your servers and the electrical outlet.

HBruijn
  • Thanks for this! I see your point in the first paragraph -- that was what I was confused about. Sounds like the purpose of the information on the power supply is to indicate how much can be connected to the power supply, and not which power outlets the power supply can be plugged into? I guess that's something I just need to accept... I will see if the servers we use have any data recorded -- they aren't HP, but might have something similar. Thanks for the warning about booting and UPS! – Ray Mar 02 '15 at 06:01
1

I would get a wattmeter and measure the actual power usage under maximum load. Modern wattmeters come with an array of functions, providing peak amps, average amps, total energy usage, etc.

E.g., my machine has an 800 W power supply, but it doesn't draw anywhere near that much. I bought a meter when I decided to get a UPS. It turns out that my machine at idle plus one monitor draws only 96 watts. Under load, it has never exceeded 250.

Of course, assuming that your fuse will blow at 10 A, you don't want your electrical load to be anywhere close to that. I'd use 60% of the rating as a reasonable safety margin.
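To illustrate (the per-device readings below are made-up examples -- use whatever your meter actually shows):

```python
# Add measured devices to a circuit until 60% of its rating is reached.
# The readings are made-up examples; substitute your wattmeter's numbers.
CIRCUIT_AMPS = 10.0       # the rating you are protecting (fuse/breaker/outlet)
SAFETY_FRACTION = 0.6     # stop at 60% of that rating

measured_amps = [2.9, 2.1, 2.4, 1.8]   # peak amps per device, from the meter

budget = CIRCUIT_AMPS * SAFETY_FRACTION
total = 0.0
for i, amps in enumerate(measured_amps, start=1):
    if total + amps > budget:
        print(f"Stop before device {i}: {total + amps:.1f} A would exceed {budget:.1f} A")
        break
    total += amps
    print(f"Device {i} added, running total {total:.1f} A of {budget:.1f} A")
```

The exact fraction is a judgment call; the point is to measure first and leave headroom.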

kevin
  • So, you're suggesting 60% of the power supply's rating (in Watts)? That's a good start until I find a reasonable way (i.e., with a Wattmeter or via software) to measure it. Thanks for this! – Ray Mar 02 '15 at 06:13
  • Nope, I mean: first measure each device, then add devices until the total reaches 6 A. – kevin Mar 02 '15 at 06:36
  • Ah! I see! 60% of the current of the power outlet. Makes more sense! – Ray Mar 02 '15 at 09:50
1

Computers draw very different amounts of power depending on whether they are idle or under load. You could estimate the maximum power used by each of the components (CPU, motherboard, hard drives, graphics card, etc.), but it's probably best just to measure it yourself.

weiyin