I'm looking into the cost of putting a Cassandra cluster into a colo facility. There would be 6-8 servers at the outset, with expected growth over time. One option is a series of Dell R320s (or similar). Another option would be blades or similarly built machines that share power.
Looking at the details of an 8-node system, I see it has 4 x 1620 W power supplies, for a total of 6480 W. On a 208 V feed that works out to more than 30 A at peak, so I've maxed out my 42U rack's power budget in 6U of space. I realize this is nameplate 'peak load', but it seems a bit extreme.
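Here's a quick sketch of the arithmetic I'm doing, assuming unity power factor (so watts and volt-amps are treated as equal, which modern active-PFC supplies come close to):

```python
# Back-of-the-envelope peak current from nameplate PSU ratings.
# Assumes power factor ~1, so amps = watts / volts.

def peak_amps(psu_watts: float, num_psus: int, line_volts: float) -> float:
    """Total nameplate current if every supply runs flat out."""
    return psu_watts * num_psus / line_volts

total_watts = 1620 * 4           # 6480 W nameplate
amps = peak_amps(1620, 4, 208)   # ~31 A on a 208 V feed
print(f"{total_watts} W -> {amps:.2f} A")
```

That's how I arrive at "more than 30 A" for 6U of gear.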
Am I misunderstanding how this calculation works? I understand W = V x A, and I get that the system won't actually pull its nameplate load, but 30 A is a lot of current. I don't have the luxury of buying one and using a Kill A Watt to measure it accurately. The spec sheet doesn't make it sound like the supplies are redundant, but that's a tremendous amount of current.
Has anyone deployed blades or multi-node servers and measured the actual current draw? I'd love to get a Dell M1000e, but the prospect of trying to budget for 40 A just makes me need to lie down.
EDIT: If I use a Kill A Watt to measure the input current of a system with n power supplies, do I sum the readings? Is each supply pulling 1/n of the total?
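To make the question concrete, here's the calculation I'd do, assuming (and this is the assumption I'm asking about) that paralleled supplies load-share roughly equally; the readings below are hypothetical:

```python
# Combining per-PSU Kill A Watt readings.
# Assumption: the supplies load-share, so each carries ~1/n of the load
# and the chassis total is the sum of the individual readings.

readings_amps = [3.9, 4.1, 4.0, 4.0]   # hypothetical per-supply measurements
total_amps = sum(readings_amps)         # chassis draw = sum of readings?
per_supply = total_amps / len(readings_amps)
print(f"total: {total_amps} A, ~{per_supply} A per supply")
```

Is that the right model, or does a redundant configuration change the picture (e.g. idle supplies drawing almost nothing)?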