
I have several servers connected to a UPS. I would like to get those servers to draw as much power as they are likely to, as a load test of the UPS. What is the best way to make a server or PC draw its maximum power?

One machine is running Windows Server 2012 R2. Another is running Ubuntu 16.04. I can spin up a virtual machine of one operating system on the other if I need to in order to perform the tests.

On GNU/Linux I have used the GNU stress application before. It offers quite a few handy options for loading the system. Some guidance on which of these is most likely to draw the most juice might be enough to answer this question.
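
For reference, a minimal stress invocation to start from (a sketch, assuming a machine with 8 logical cores and a few GiB of free RAM; adjust the worker counts to your hardware):

    # 8 CPU workers spinning on sqrt(), 4 I/O workers calling sync(),
    # and 2 VM workers each allocating and touching 1 GiB, for 10 minutes.
    # The CPU workers usually dominate power draw; the VM workers keep
    # the memory bus busy as well.
    stress --cpu 8 --io 4 --vm 2 --vm-bytes 1024M --timeout 600

With the inline power meter attached, varying the --cpu and --vm counts should show which mix peaks on your particular hardware.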

I do have a power meter I can put inline. I could run a few hours of experiments to see where I get, but if someone can already answer this or provide guidance it would save me some time.

I do have the plate values of the servers, so I know their rated maximum power, and I know the UPS could support that. What I want to do is model how much power they will probably draw under heavy load, so I can work out how long the UPS might last rather than how long I can guarantee it lasting. In the old days these machines might have had a fixed job and I could just run that job flat out. Nowadays VMs, containers and private clouds mean that they could be doing almost anything, so I just have to push the hardware to its limits.

TafT

1 Answer


Run SETI (search for seti@home). This uses 100% of the CPU and graphics cards where applicable.
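
If you want to set this up headless on the Ubuntu box, a rough sketch using the BOINC command-line client (assuming the stock Ubuntu boinc-client package; YOUR_ACCOUNT_KEY is a placeholder for the key from the project website):

    # Install and start the BOINC client daemon
    sudo apt-get install boinc-client
    # Attach to the SETI@home project; YOUR_ACCOUNT_KEY is a placeholder
    boinccmd --project_attach http://setiathome.berkeley.edu/ YOUR_ACCOUNT_KEY

Note that a distributed-computing workload mainly exercises the CPU (and the GPU where one is configured), so it may not load the disks or memory bus the way a dedicated stress tool does.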