Why aren't CPUs bigger?

21

CPUs are relatively small, and engineers are constantly trying to make them smaller and fit more transistors into the same surface area.

Why aren't CPUs bigger? If an approximately 260 mm² die can hold 758 million transistors (AMD Phenom II X4 955), then a 520 mm² die should be able to hold double the number of transistors and, technically, double the clock speed or core count. Why isn't this done?

Simon Verbeke

Posted 2011-11-30T15:41:59.017

Reputation: 3 553

Question was closed 2011-11-30T23:22:49.853

I don't know all the details, but basically the closer the transistors etc. are together on the chip, the more efficient it is. So quadrupling the area would make the chip slower. – ChrisF – 2011-11-30T15:46:57.140

Plus, especially considering the current state of applications, modern day CPUs spend an awful lot of time doing nothing. They twiddle their thumbs while we, the users, figure out what we want to do. – surfasb – 2011-11-30T16:07:36.497

@ChrisF You confuse the impact of die shrinking (speed gain as a result of reduced capacitances) with reduced transistor numbers. Ask yourself: will the individual core on a dual-core run faster than the one on a quad-core? – artistoex – 2011-11-30T17:14:32.030

This is done - look at Intel's new LGA2011 platform. – Breakthrough – 2011-11-30T17:31:00.523

The clock speed has nothing to do with the number of transistors. However, with more transistors you can design instructions to take fewer clock-cycles to complete. – BlueRaja - Danny Pflughoeft – 2011-11-30T20:16:01.163

Also, yield. The chance of getting a defective chip is proportional to the area. If you made your dies big enough, you would have to throw away most of them. This means a very low yield, which means a very high price. – drxzcl – 2011-11-30T23:02:46.840

See relevant topic here: http://electronics.stackexchange.com/questions/15848/why-cpus-are-becoming-smaller-and-smaller

– Kromster – 2011-12-02T12:54:07.977

I disagree with the close votes. There are clear reasons why making bigger chips doesn't make sense, as the top answers show. So it isn't an opinionated question (like "Is Android better than iOS"). I was also interested in this question! – David Miani – 2011-12-17T08:04:50.650

Answers

18

Generally you're right: In the short term, increasing parallelization is not only viable but the only way to go. In fact, multi-cores, as well as caches, pipelining and hyper-threading are exactly what you propose: speed gain through increased chip area use. Of course, shrinking geometries does not collide with increasing die area use. However, die yield is a big limiting factor.

Die yield falls in inverse proportion to die size: larger dies are simply more likely to "catch" wafer defects. If a wafer defect hits a die, you can throw the die away. Die yield obviously affects die cost, so there's an optimal die size in terms of costs vs. profits per die.
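To make the yield effect concrete, here is a small sketch using the classic Poisson yield model, Y = e^(−D·A), where D is the defect density and A is the die area. The defect density used below is an illustrative assumption, not a figure from this answer:

```python
import math

def poisson_yield(defects_per_cm2, die_area_cm2):
    """Fraction of defect-free dies under the Poisson yield model."""
    return math.exp(-defects_per_cm2 * die_area_cm2)

D = 0.5  # defects per cm^2 -- an illustrative assumption
small = poisson_yield(D, 2.6)  # ~260 mm^2 die
large = poisson_yield(D, 5.2)  # ~520 mm^2 die

# Doubling the area squares the per-die survival probability,
# so yield drops much faster than linearly.
print(f"small die: {small:.2f}, double-size die: {large:.2f}")
```

Under these numbers a 260 mm² die yields about 27% while the double-size die yields about 7% - which is why the cost optimum sits at moderate die sizes.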

The only way to produce significantly larger dies is to integrate fault-tolerant and redundant structures. This is what Intel tries to do in their Tera-Scale project (UPDATE: and what is already practiced in everyday products, as Dan points out).

artistoex

Posted 2011-11-30T15:41:59.017

Reputation: 3 353

In modern complex CPUs/GPUs, die defects often just feed into binning. Mid/upper-level GPUs typically have a full-die part and one or two parts with a few sub-components disabled, to get more price/capability points from fewer chip designs. The same is done with CPUs. AMD's tri-core chips are quads with one core disabled, and Intel's LGA2011 chips are all 8-core parts. The full dies are only being used as Xeons; the 4/6-core LGA2011 i7s are 8-core dies with parts disabled. If die errors fall in the right locations, they are binned as cheaper parts. For more modular GPUs, error rates set the low bin. – Dan is Fiddling by Firelight – 2011-11-30T19:25:05.017

@DanN Thank you, I've added this to my answer – artistoex – 2011-11-30T22:13:30.297

23

There are a lot of technical concerns (path lengths get too long and you lose efficiency, electrical interference causes noise), but the primary reason is simply that that many transistors would run too hot to cool adequately. That's the whole reason manufacturers are so keen to reduce the die size - it allows for performance increases at the same thermal levels.
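As a rough illustration of the thermal point: first-order CMOS dynamic power scales as P = a·C·V²·f, so doubling the number of switching transistors (i.e., the switched capacitance) at the same voltage and clock doubles the dynamic power. All numeric values below are illustrative assumptions, not measurements:

```python
def dynamic_power_w(switched_capacitance_f, voltage_v, frequency_hz, activity=1.0):
    """First-order CMOS dynamic power: P = a * C * V^2 * f."""
    return activity * switched_capacitance_f * voltage_v ** 2 * frequency_hz

# Illustrative values: 1 nF total switched capacitance, 1.2 V, 3 GHz.
base = dynamic_power_w(1e-9, 1.2, 3e9)
doubled = dynamic_power_w(2e-9, 1.2, 3e9)  # twice the transistors switching

# Twice the switching transistors at the same V and f = twice the heat.
print(base, doubled)
```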

Shinrai

Posted 2011-11-30T15:41:59.017

Reputation: 18 051

I should add that I mean in the context of a standard desktop/laptop machine, of course. – Shinrai – 2011-11-30T16:07:43.747

Path lengths don't necessarily increase, they are a local thing: putting two cores on a chip won't increase the path length inside a core, will it? Heat dissipation will also be distributed over a larger area, so that's not such a big problem either. – artistoex – 2011-11-30T17:09:50.520

Right, there's a lot of nuance, but I didn't feel getting into it was warranted. (I also don't necessarily mean in the context of MORE cores, since the question wasn't quite that explicit about that.) – Shinrai – 2011-11-30T17:16:37.643

The point is: multi-core processors are exactly what the OP proposed--speed gain through increased chip area use. – artistoex – 2011-11-30T17:29:40.743

Naively, one might assume that you could make BIGGER cores to get faster cores, which of course won't work. I'm intentionally being very non-detailed about this because it's a very broad question clearly being asked from a relatively nontechnical standpoint. – Shinrai – 2011-11-30T17:44:58.833

That's what hyper-threading is: bigger, faster cores. There are many other sub-core strategies to increase speed. – artistoex – 2011-11-30T17:54:57.597

Of course you're right, but you can't just make it bigger to make it faster, there's a bit more engineering involved than that. I think this discussion is getting out of scope of the question so I'll leave it be for now. This is where I'd normally offer to upvote your answer if you wanted to include all this stuff, but as I've already upvoted it I guess I can't do more than say "This stuff would be good in your answer" :) – Shinrai – 2011-11-30T18:05:18.277

How do you figure hyper-threading is "bigger, faster cores"? Hyper-threading is all logic based and has nothing to do with size... Meaning if there is excess available on the current core it uses it. IE: if your MMX unit and FPU are in use on a given core you can still perform integer-based calculations. – Supercereal – 2011-11-30T19:04:23.817

@Kyle Hyperthreading makes the core faster and bigger. It has to do with size insofar as it uses size. Many posters replied to the OP that you can't make a CPU faster by doubling its size. But you can: e.g. by introducing hyperthreading. Turning size into speed is the whole point of all these inventions like caches, pipelining, etc. – artistoex – 2011-11-30T20:03:50.107

@Kyle of course, I agree with you and Shinrai that much engineering effort has been put into this strategy. It took a big industry decades to achieve this level of sophistication. Size in itself by no means turns into speed. – artistoex – 2011-11-30T20:33:41.000

According to your logic, a core with hyperthreading is actually larger and faster? This is simply not true. Intel makes all their CPUs in a given line of products off the same die/platter design and turns things off to bring it to spec for the particular model they are creating. Your 2500K has the same size core as a 2600K; the 2600K just has the hyperthreading logic turned on... It has nothing to do with size or speed; in fact, hyperthreading slows down certain things that rely heavily on level 2/3 cache. – Supercereal – 2011-11-30T20:41:52.250

@Kyle - I think artistoex's point is that a comparatively larger die design makes the engineering for hyperthreading feasible - ie, if you wanted a hyperthreaded 386 it would be the size of a hard drive, because the transistor density is so low that you'd HAVE to make it bigger for the extra logic. The fact that our processors are comfortably sized to allow for it without enlarging them from otherwise equivalent designs is a happy coincidence, I think. Or I might just be reading him wrong. – Shinrai – 2011-11-30T20:50:27.073

@Kyle I'd say that's splitting hairs. A core with hyperthreading turned off is larger and potentially faster. It's just amputated. – artistoex – 2011-11-30T20:51:56.293

@Shinrai Yes, exactly. Size of course is the result of a compromise, not coincidence. Why don't we make 386s for desktops anymore? Produced in current-day technology, they would occupy 2000 times less space. Of course we make chips as big as we can, and use the space to gain speed. – artistoex – 2011-11-30T21:00:52.000

I still insist hyperthreading has absolutely nothing to do with core size... I'm sure the logic unit has to fit on the die/platter somewhere so I will concede that it makes the die/platter insignificantly larger but ONLY to hold the logic controller not the the actual core itself. – Supercereal – 2011-11-30T21:10:19.277

Sadly we need to go back a bit and compare a line of Intel processors that didn't already have hyper-threading built in but turned off, to see exactly what the addition of hyper-threading added: http://www.pcmag.com/article2/0,2817,1155017,00.asp As you can see, the core size is the same; they just strapped some logic units on there for a total size increase of 5%... Otherwise this is your standard Northwood processor (which was released prior to hyperthreading)... I'm just saying that the physical size of the processing core has nothing to do with hyperthreading.

– Supercereal – 2011-11-30T21:18:28.293

15

Several of the answers given here are good answers. There are technical issues with increasing the size of the CPU, and it would lead to a lot more heat to deal with. However, all of them are surmountable given strong enough incentives.

I would like to add what I believe is a central issue: economics. CPUs are manufactured on large silicon wafers, with many CPUs per wafer. The real manufacturing cost is per wafer, so if you double the area of a CPU you can only fit half as many on a wafer, and the per-CPU price doubles. Also, not all of the wafer comes out perfect; there can be defects. So doubling the area doubles the chance of a defect in any specific CPU.
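The wafer arithmetic can be sketched with the standard dies-per-wafer approximation combined with a simple exponential yield model. The wafer cost and defect density below are illustrative assumptions, not real fab figures:

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    # Standard approximation: gross dies (wafer area / die area)
    # minus an edge-loss term along the wafer circumference.
    r = wafer_diameter_mm / 2
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(wafer_cost, die_area_mm2, defects_per_cm2,
                      wafer_diameter_mm=300):
    # Exponential yield model; die area converted from mm^2 to cm^2.
    yield_frac = math.exp(-defects_per_cm2 * die_area_mm2 / 100)
    return wafer_cost / (dies_per_wafer(wafer_diameter_mm, die_area_mm2)
                         * yield_frac)

# Illustrative: a $5000 wafer, 0.5 defects/cm^2.
print(cost_per_good_die(5000, 260, 0.5))  # ~$80 per good 260 mm^2 die
print(cost_per_good_die(5000, 520, 0.5))  # ~$635 -- far more than double
```

Doubling the die area roughly halves the dies per wafer *and* cuts the yield, so the cost per good die rises much faster than 2×.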

Therefore from the economic standpoint the reason they are always making things smaller is to get better performance/mm^2, which is the determining factor in price/performance.

TL;DR: In addition to the other reasons mentioned, doubling the area of a CPU more than doubles its cost.

Mr Alpha

Posted 2011-11-30T15:41:59.017

Reputation: 6 391

This is the main reason. Chapter 1 of Hennessy and Patterson's Computer Architecture textbook describes the fabrication process and the considerations that go into driving CPU dies to be as small as possible.

– Steve Blackwell – 2011-11-30T21:48:01.827

3

Adding more transistors to a processor doesn't automatically make it faster.

Increased path length == slower clock rate.
Adding more transistors will increase the path length. Any additional area has to be put to valuable use, or it will increase cost, heat, and energy consumption while decreasing performance.

You can of course always add more cores. Why don't they do this? Well, they do.

user606723

Posted 2011-11-30T15:41:59.017

Reputation: 1 217

I don't really consider this off-topic here (although it would be on-topic there as well). – Shinrai – 2011-11-30T19:54:12.967

Yeah, I agree. I just think that it would be better answered there. I removed the line. – user606723 – 2011-11-30T19:58:40.303

2

Your general assumption is wrong. A CPU with a double-sized die cannot operate at double the speed. The extra area would only add space for more cores (see some Intel many-core chips with 32 or 64 cores) or larger caches. But most current software cannot make use of more than 2 cores.

Therefore the increased die size massively increases the price without a comparable gain in performance. This is one of the (simplified) reasons CPUs are the size they are.

Robert

Posted 2011-11-30T15:41:59.017

Reputation: 4 857

This is not quite true - with more transistors, you could decrease the propagation-depth so instructions take fewer clock-cycles to complete. You're right that it has nothing to do with clock speed, though. – BlueRaja - Danny Pflughoeft – 2011-11-30T20:19:50.597

1

In electronics, SMALLER = FASTER: a 3 GHz part needs to be much smaller than a 20 MHz part. The larger the interconnections, the greater the ESR and the slower the speed.

Doubling the number of transistors doesn't double the clock speed.
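One way to see why higher clocks demand smaller circuits: a signal cannot cross more of the chip per cycle than physics allows. A quick sketch using the vacuum speed of light as a generous upper bound (real on-chip signals propagate considerably slower):

```python
C_MM_PER_S = 3.0e11  # speed of light in mm/s -- an upper bound only

def max_reach_mm(clock_hz):
    """Farthest a signal could possibly travel in one clock cycle."""
    return C_MM_PER_S / clock_hz

print(max_reach_mm(20e6))  # 20 MHz: 15000 mm -- distance is no constraint
print(max_reach_mm(3e9))   # 3 GHz: 100 mm -- and real signals cover far less
```

At 3 GHz even this best-case bound is on the order of a die width, so every extra millimeter of interconnect eats directly into the timing budget.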

Fiasco Labs

Posted 2011-11-30T15:41:59.017

Reputation: 6 368

Increasing the clock speed is only one approach to speed gain. Doubling transistors is another one. Apart from that, shrinking interconnections does not conflict with increasing die area. – artistoex – 2011-11-30T17:39:08.710

@artistoex, but simply doubling the transistors doesn't make it faster either. It needs to be engineered in a way that takes advantage of those transistors. More transistors (in the same area) typically means a lower clock. – user606723 – 2011-11-30T19:29:42.207

1

The cost of producing the raw wafers is a factor. Monocrystalline silicon is not free, and the refining process is somewhat expensive. So using more of your raw material increases cost.

steampowered

Posted 2011-11-30T15:41:59.017

Reputation: 2 109

0

Big living things, artificial or not - like dinosaurs - are losers. The surface-area-to-volume ratio works against their survival: too many constraints on getting energy, in every form, in and out.

Massimo

Posted 2011-11-30T15:41:59.017

Reputation: 145

0

Think of a CPU as a network of connected nodes (transistors). In order to provide more capabilities the number of nodes and the paths between them increase to a degree, but that increase is linear. So one generation of a CPU might have a million nodes, the next might have 1.5 million. With miniaturization of the circuit, the number of nodes and paths are condensed into a smaller footprint. The current fabrication processes are down to 30 nanometers.

Let's say that you need five units per node and five units of distance between two nodes. End to end, in a straight line, you can fit a bus of 22222 nodes in 1 cm of space, and a matrix of roughly 493 million nodes in a square cm. The design of the circuit is what contains the CPU's logic. Doubling the space is not what increases the speed; it would just enable the circuit to have more logical operators, or, in the case of multi-core CPUs, allow the circuit to handle more work in parallel. Increasing the footprint would actually decrease the clock speed, because the electrons would have to travel longer distances through the circuit.
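The arithmetic above can be reproduced with a quick sketch. Note that the answer does not say how big one "unit" is, so the 45 nm value here is an assumption chosen to match the quoted figures:

```python
UNIT_NM = 45              # assumed size of one "unit" (not given in the answer)
PITCH_NM = 10 * UNIT_NM   # 5 units per node + 5 units of spacing
CM_IN_NM = 10_000_000     # 1 cm expressed in nanometers

nodes_per_cm = CM_IN_NM // PITCH_NM   # nodes end to end along 1 cm
nodes_per_cm2 = nodes_per_cm ** 2     # square matrix in 1 cm^2

print(nodes_per_cm)    # 22222 nodes in a straight line
print(nodes_per_cm2)   # 493,817,284 -- the ~493 million figure
```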

Michael Brown

Posted 2011-11-30T15:41:59.017

Reputation: 295