56

Computers mainly need three voltages to work: +12V, +5V and +3.3V, all of them DC.

Why can't we just have a few (for redundancy) big power supplies providing these three voltages to the entire datacenter, with servers using them directly?

That would be more efficient, since converting power always has losses; it's better to do it a single time than in each server's PSU. It would also be better for UPSes, since they could use 12V batteries to directly power the datacenter's 12V grid instead of inverting the 12V DC into 120/240V AC, which is quite inefficient.

  • 3
    So basically have one point of failure? :/ – canadmos May 05 '14 at 01:22
  • 2
    @canadmos maybe not a single, there can be several PSUs, just not as many as one for each server. –  May 05 '14 at 01:25
  • I believe it's for the same reason one needs 5 different power adapters for 5 different devices at home and the same reason we don't have 5V or 12V supplies at home: Because we're not there yet. – V13 May 05 '14 at 01:30
  • 8
    Have you seen a blade server chassis? That is an example of moving towards this kind of system, maybe. – Rob Moir May 05 '14 at 11:58
  • 1
    What is the point of this question? It's clearly not the way the majority of systems are designed for historical, cost and inertia reasons. This really isn't answerable in this form. – ewwhite May 05 '14 at 12:31
  • I think @ewwhite hit the nail on the head. Q: Why aren't we there yet? A: Cost and inertia. – joeqwerty May 05 '14 at 14:06
  • 12
    As an engineer, the main question is why people run their AC at 50 Hz or 60 Hz. PSU's are so big because the frequency is so low. But in a DC environment behind UPS'es, you could pick any frequency. At 500 Hz, the PSU's would be smaller and more efficient. (Basically, your caps can be 10x smaller because each period now lasts 2 ms instead of 20 ms) – MSalters May 05 '14 at 14:51
  • 1
    Assuming we ever get an all DC solution, there's a good chance that 3.3v won't be part of it. The existence of molex-sata adapters has precluded any widespread use of 3.3v by disk drives; and the last major component on mobos that operated at 3.3v (legacy PCI) is increasingly unused on modern systems. – Dan Is Fiddling By Firelight May 05 '14 at 15:22
  • My DELL representative asked for my ideal solution and we have a power supply that has instantaneous outages, so I'd like a desktop machine with a laptop power supply ;-) – Mark Hurd May 06 '14 at 05:31
  • 1
    @MSalters: When I visited a vintage computer museum, I was told that some of the supercomputers exhibited there were running on 400Hz for that very reason. One Cray in particular, if I remember correctly. They had some motor-generator device centrally installed to do the frequency change for the whole center, and the inertia of its rotor also served as a short-term UPS. I guess radio emission from the power supply lines might be a problem these days, though. – MvG May 06 '14 at 19:34
  • 2
    @MvG For the modern equivalent of that you can get [interruptible power via a flywheel](http://www.cat.com/en_US/power-systems/electric-power-generation/ups-flywheel.html). There's some advantages to a flywheel, especially in areas subject to frequent brownouts/power drops. Switching over to batteries is REALLY HARD on the batteries but if a flywheel can sustain the load for a short interval you can drastically extend the battery life. – MikeyB May 08 '14 at 05:22
  • 1
    MikeyB, that is **properly brilliant**. I had no idea - thank you for mentioning it! – MadHatter May 08 '14 at 10:59
  • [This article](http://microsoft-news.com/microsoft-successfully-demonstrates-fuel-cell-powered-data-center-points-to-energy-efficient-future/) focuses on use of on-site fuel cells in place of grid power, but they explicitly mention the use of DC as one of the power-saving advantages of the concept, so it would seem that you are right and it's coming. –  May 06 '14 at 15:21

6 Answers

60

What'cha talking 'bout Willis? You can get 48V PSUs for most servers today.

Running 12V DC over medium/long distances suffers from voltage drop, whereas 120V AC doesn't have this problem¹. Big losses there. Run high-voltage AC to the rack, convert it there.

The problem with 12V over long distances is that you need higher amperage to transmit the same amount of power, and higher amperage means more resistive (I²R) loss and requires larger conductors.
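
To see the scale of the problem, here's a back-of-the-envelope sketch. The cable resistance and load figures are illustrative assumptions, not measurements from any real installation:

```python
# Resistive loss in a feed cable for a fixed load power at two supply voltages.
# R and LOAD below are assumed round numbers for illustration only.

def line_loss(power_w, volts, resistance_ohm):
    """P_loss = I^2 * R, where I = P / V."""
    current = power_w / volts
    return current ** 2 * resistance_ohm

R = 0.05      # assumed round-trip cable resistance, ohms
LOAD = 1200   # assumed server load, watts

loss_12v = line_loss(LOAD, 12, R)    # 100 A through the cable
loss_120v = line_loss(LOAD, 120, R)  # only 10 A through the same cable

print(loss_12v, loss_120v)
```

At ten times the voltage, the same cable carries a tenth of the current and so dissipates one hundredth of the power, which is why you convert at (or inside) the rack rather than in the basement.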

The Open Compute Open Rack design uses 12V rails inside a rack to distribute power to components.

Also, large UPSes don't turn 12V DC into 120V AC - they typically use 10 or 20 batteries hooked in series (and then parallel banks of those) to provide 120V or 240V DC, and then invert that into AC.

So yes, we're there already for custom installations, but there's a fair bit of overhead to get going and commodity hardware generally doesn't support it.

Non sequitur: measuring is difficult.

1: I lie, it does, but less than DC.

MikeyB
  • A single battery is 1.5V (NiCd, NiMH) or 3.7V (LiIon, LiPol, LiPol has some other variants too), so it's more batteries than 10 for 120V. – Jan Hudec May 06 '14 at 14:45
  • 6
    A single *cell* tends to have a low voltage of 1.5V or 3.7V but a *battery* is often multiple cells. What's in your car? – MikeyB May 06 '14 at 15:21
  • True. But then all that's in series would simply be a (single) battery. So the number is somewhat arbitrary. – Jan Hudec May 06 '14 at 16:15
  • 2
    this dude knows what he's talking about – Michael Martinez May 06 '14 at 22:52
  • 3
    Everything used (19th C) to run on DC (Edison's first power plant was DC). This involved building lots of tiny power plants everywhere because of voltage drop. AC was invented to prevent this issue. Off topic but it's basically the same issue you describe above. – Liam May 07 '14 at 09:57
  • 2
    Just a clarification: voltage drop is not lower with 120AC supply because the voltage is AC, but because increasing the voltage through a transformer lowers the current (and vice versa). A theoretical 120 DC line would also have 10x lower voltage drop. – Groo May 09 '14 at 11:32
  • The comment that AC was invented to prevent voltage drop is plain wrong. There were 2 competing systems, DC & AC. AC has the ability to be stepped up/down with a transformer; it's not that simple with DC. But DC doesn't suffer from skin effect, so AC has losses that DC does not. That's why we have some cables that are DC. http://en.wikipedia.org/wiki/Skin_effect. Anyway, AC won out over DC. – hookenz Jul 10 '14 at 20:17
  • What about having a single/double power supply (12V) per rack? That would already allow cost reduction and a better supply, right? – Adrian Maire Jan 02 '20 at 20:07
  • Using higher voltage to transmit over long distances is a double benefit: less voltage drop means your load is more likely to accept the input, but also fewer losses. Example: 1200 W at 12V is 100A. Let's assume you get a voltage drop of 2V in your cabling; that is a 16.7% loss, and hopefully your load accepts working at 10V. Now, at 120V and 10A, your voltage drop would be about 0.2V, or 0.167% loss, and your load is almost certain to work at 119.8V. – Memes Apr 07 '22 at 11:26
18

It's not necessarily more efficient, because you increase the I²R losses. Reduce the voltage and you have to increase the current in proportion, but the resistive loss (not to mention the voltage drop) in power cables increases with the square of the current. Thus you also need massive, thick cables, using more copper.

Telcos typically use -48V, so they still need power supplies in servers - DC-DC converters - to make the DC level conversion, which internally is a conversion to AC and then back again. The cables are much thicker.

So it's not necessarily a great idea to run everything on DC for efficiency.

xcxc
  • 1
    A spanner has a much lower resistance than a human. – user253751 May 05 '14 at 04:01
  • 9
    "Volts jolt, but mills kill" is a bit misleading. Mills kill, but without enough volts, you'll never get a dangerous level of mills. Lick a 12V busbar and your tongue will sting, but you'll survive. Lick 240V, and you'll be in the hospital. – Ian Howson May 05 '14 at 06:30
  • 1
    Yes, you are right. Then there was the guy with nipple piercings who decided to test his internal resistance with an AVO... It doesn't even take 12V to kill when the conditions are right. – xcxc May 05 '14 at 07:05
  • On the topic of large currents, there's also the good ol' giant-flying-cable-of-death. – Bob May 05 '14 at 11:12
  • Mmmmm 9V batteries taste like sour candy. :D – MikeyB May 05 '14 at 15:44
  • The last paragraph seems out of place in this answer as well as here on the site. Who on Earth runs any sort of live voltage on open bus bars in a commodity server room!? Eletrical cabling has (multiple, even) isolation covers for a reason. Same with the way outlets are designed. Even 400 V fused to 16 A is perfectly safe to be around as long as it's tucked away inside cabling; it only gets dangerous once you have open access to it. (And before you ask, that's a normal mains feed supply for single-dwelling houses here in Sweden: three-phase 16 A AC, where single-phase is 240 V AC to ground.) – user May 06 '14 at 07:43
  • I concede it is not in keeping with rest of the answer, but wanted to highlight the dangers of high current low voltage. I know of the whole battery backup of an exchange getting taken out cos a spanner got dropped in a rack. – xcxc May 06 '14 at 07:51
  • @Michael. I was not suggesting that line/live AC voltage is run on busbars. – xcxc May 06 '14 at 08:05
  • 1
    @xcxc **live** voltage, not **line** voltage. – user May 06 '14 at 08:06
  • Yes - exactly - live :o - line :| – xcxc May 06 '14 at 09:16
  • @xcxc 12 V is live voltage. So is 400 V. 120/240 V AC is what I'd call line voltage; I wouldn't call 12 V line voltage. – user May 06 '14 at 09:21
  • The answer explicitly states *"I'd rather work around proper 240V mains cabling than open bus bars carrying 300A at 12 V."* My initial comment states in part *"Who on Earth runs any sort of live voltage on open bus bars in a commodity server room!?"*. If that's not what you meant to write, you should [edit] your answer to clarify. – user May 06 '14 at 09:35
  • let us [continue this discussion in chat](http://chat.stackexchange.com/rooms/14348/discussion-between-xcxc-and-michael-kjorling) – xcxc May 06 '14 at 10:03
  • edited to reflect the comments! – xcxc May 06 '14 at 11:27
11

Telcos have used DC in their central offices nearly exclusively, historically. In what seems to be a recurring pattern in computing, I'd argue that the IT industry moving to DC and, effectively, re-inventing the "wheel" that telcos already invented years ago is just par for the course.

The last few years have seen various articles about using DC power to make datacenters more efficient. I know that Facebook and Google are both big DC power users. I think it's just a matter of time before commodity hosting moves in that direction, too.

Given the entrenched nature of AC power, though, it's going to take time.

Evan Anderson
6

As pointed out above, high current = high losses and thick cables.

Another prohibiting factor is that high current poses a fire risk; remember that 100A is sufficient for arc welding.

smirkingman
3

Basically, the reason for using higher-voltage AC is that we want to minimize power loss and save money.

  1. P = U·I: power (W) is voltage (V) multiplied by current (A). A given piece of hardware needs a certain power; you can choose the voltage, but the current will then vary accordingly. This is true for both DC and AC. This leads to a first problem and its solution.

  2. Losses grow with current and resistance (the cable drop is U = R·I, so the dissipated power is P = R·I²). The more current, the more loss in the form of heat. So you need to favor higher voltage to minimize current and losses. But if the hardware needs 3 V and you choose 100 V for the power supply, then you need to transform 100 V down to 3 V at a point close to the hardware input. This leads to a second problem and its solution.

  3. It is (actually, it was) difficult to transform DC voltages, especially without too many losses; you need active and expensive switched-mode power supplies. By contrast, it is easy to change AC voltages using a transformer (two simple static coils coupled by a magnetic field).

  4. Conclusion from the previous choices: it is better to use a higher voltage, which then must be AC to allow easy voltage conversion.
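
The arithmetic in points 1 and 2 can be made concrete with a small sketch of how much copper the same loss budget costs at two voltages; the run length, loss budget and load figures below are illustrative assumptions:

```python
# How much conductor cross-section do you need to keep resistive loss under a
# budget?  Uses R = rho * length / area and P_loss = I^2 * R.
# All load/length/budget figures are assumed for illustration.

RHO_CU = 1.68e-8   # resistivity of copper, ohm * metre

def area_for_loss(power_w, volts, length_m, max_loss_w):
    """Minimum cross-section (m^2) keeping resistive loss under max_loss_w."""
    current = power_w / volts
    max_r = max_loss_w / current ** 2   # from P_loss = I^2 * R
    return RHO_CU * length_m / max_r

# 1200 W load, 10 m run, 12 W (1%) loss budget:
a12 = area_for_loss(1200, 12, 10, 12)    # ~140 mm^2 of copper at 12 V
a120 = area_for_loss(1200, 120, 10, 12)  # ~1.4 mm^2 of copper at 120 V

print(a12 / a120)  # the 12 V feed needs ~100x the cross-section
```

Because the required cross-section scales with the square of the current, dropping the voltage by 10× costs roughly 100× the copper for the same loss budget, which is exactly the trade-off the points above describe.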

Engineers compare the cost of electrical losses and failures against the cost of voltage conversion for a specific problem, and then see which is cheaper. Add to this the impact of failures, etc.

Today we are starting to see DC voltage converters that are efficient and less expensive, so the best solution may change in the future.

mins
2

It likely boils down to money. 120V AC power supplies are readily available by the truckload; the market for high-capacity, smooth 12/5/3.3V DC supplies is rather small, since there are far more single computers out there than datacenters. As mentioned in other answers, it's unlikely that any datacenter will put 12V in the wall plugs and the converter in the basement - more likely the opposite: plenty of commercial buildings use 480V for primary lighting, as they can run many more fixtures on one circuit. Running 240V AC to the racks makes more sense than 12V DC, but I expect the future will see two large PSUs in the top of each rack and 4-pin power plugs for each server within that rack.

paul
  • 1
    Most simple servers (single socket, small number of disk drives, no discrete GPU for compute) could be powered off a picoPSU (a small board that plugs into the 24-pin ATX connector, takes 12V, and produces a few amps of 3.3/5V power for misc components) like those used in a number of DIY minibox PCs. http://www.mini-box.com/DC-DC – Dan Is Fiddling By Firelight May 05 '14 at 15:19