With falling silicon costs and rising consumer demand, manufacturers seem to be pushing one of two things: clock speed and/or core count. The way things are going, processor clock speeds don't seem to be rising anymore, but core counts are.
I remember only a few years back, I had a nice, fast single-core Pentium 4 processor. Fast-forward to today, and I don't think you can even purchase a single-core processor (not to mention the spread of multicore processors even in cellphones). The way things are going, we might find computers with hundreds of cores in a few years (and I know many operating systems already support them).
Is it more beneficial to a system's overall performance to increase the clock speed, or to increase the number of cores? Assume we're getting into hundreds of cores all running together, or clock speeds ten times higher than what we have today (regardless of whether or not that is physically possible).
What are some examples of common processes (e.g. encryption, file compression, image/video editing) that would benefit most from one or the other? Are there processes which could be, but currently aren't (due to technical reasons), sped up by increasing their parallelism?
Assume the hypothetical processor has the exact same core design (word size, address bit width, memory bus width, cache, etc.), so the only variables here are clock speed and core count. And again, I'm not talking about one, two, or even four cores - imagine tens to hundreds.
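To make the distinction concrete, here is a minimal sketch in Python (my own illustration; the function names and workloads are hypothetical, not part of the question) contrasting a task that scales with core count against one that is inherently serial and so only benefits from a faster clock:

```python
import hashlib
from multiprocessing import Pool

def hash_block(data: bytes) -> bytes:
    # Independent work item: blocks can be hashed on any core, in any
    # order, so throughput grows roughly with the number of cores.
    return hashlib.sha256(data).digest()

def hash_chain(seed: bytes, rounds: int) -> bytes:
    # Serial work item: every round needs the previous digest, so extra
    # cores cannot help -- only a faster clock shortens the chain.
    digest = seed
    for _ in range(rounds):
        digest = hashlib.sha256(digest).digest()
    return digest

if __name__ == "__main__":
    blocks = [bytes([i]) * 1_000_000 for i in range(64)]  # 64 x 1 MB blocks

    # Parallel-friendly: Pool spreads the blocks across all available cores.
    with Pool() as pool:
        digests = pool.map(hash_block, blocks)

    # Serial: bound by a single core's speed no matter how many cores exist.
    chained = hash_chain(b"seed", 100_000)
    print(len(digests), chained.hex()[:16])
```

File compression and bulk encryption of independent chunks look like `hash_block`; an iterated key-derivation chain looks like `hash_chain`.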
It's all going to depend on what you want to do on that computer. Multiple cores are good for some things, higher clock speeds for others. – ChrisF – 2011-08-17T19:30:16.660
@ChrisF I personally know the answer, but I'm asking this for two reasons. The first is to have this information on the website (I've only seen it asked in relation to dual or quad core processors), and the second is to try to give people an idea of what's to come "in the future" and to show what the applications are of both sides of the equation. – Breakthrough – 2011-08-17T19:32:57.457
It would be better to rework the question a bit. At the moment it reads like a "list of X" question where each answer is equally valid (especially 'cos of that last sentence). – ChrisF – 2011-08-17T19:34:18.420
The general answer is "yes". Or perhaps "maybe". Processor speed is really limited by memory access speed -- the effective MIPS rate of a processor is typically 10-30% of the max rate, due to memory delays. Multiple processors can both help and hurt this situation, depending on the memory subsystem design and the type of applications being executed. I recall one case where adding a second processor increased throughput by only about 10% for the average workload, due to memory contention. – Daniel R Hicks – 2011-08-17T19:35:01.057
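As a rough sanity check on that 10-30% figure (the numbers below are illustrative assumptions, not from the comment above): if a core's base CPI is 1.0, 2% of instructions miss the last-level cache, and each miss stalls for 200 cycles, then

$$\mathrm{CPI}_{\mathrm{eff}} = \mathrm{CPI}_{\mathrm{base}} + p_{\mathrm{miss}} \cdot c_{\mathrm{miss}} = 1.0 + 0.02 \cdot 200 = 5.0$$

so the core retires instructions at only $1/5 = 20\%$ of its peak rate, squarely inside the quoted band.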
Related: CPU Cores: The more the better? – slhck – 2011-08-17T19:35:50.897

@ChrisF updated the question to try to direct the flow a bit more. This is a very abstract topic, and again, I want people to try to think "towards the future". Imagine +20 GHz clock speeds versus 128 cores. Obviously we have to take into account Amdahl's law (and I would expect it to show up in at least one answer), but that law also makes some assumptions about the workload. – Breakthrough – 2011-08-17T19:38:03.937
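For reference, Amdahl's law bounds the speedup from $N$ cores when a fraction $p$ of the workload parallelizes (the 95% below is an assumed example, not a figure from the thread):

$$S(N) = \frac{1}{(1-p) + p/N}, \qquad S(128)\big|_{p=0.95} = \frac{1}{0.05 + 0.95/128} \approx 17.4$$

Even with infinitely many cores, the speedup for that workload can never exceed $1/(1-p) = 20\times$, which is why the serial fraction dominates the clock-versus-cores question.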
@DanH then that also shows another implicit problem - processor cache. Obviously some (but not all) memory delay problems can be solved by an increased CPU cache, but what if a computer had multiple memory controllers (all with multiple, segregated amounts of RAM accessible by a single core) that could interface with a central "datastore" of memory (accessible by all cores)? AFAIK, nothing like this exists yet, but this is the kind of thinking I want to see in the answers (solutions to tomorrow's problems, basically). – Breakthrough – 2011-08-17T19:40:55.023
I appreciate the thought here, but I'm sort of on board with the idea that this question is way too broad to really be good, although the edits help a lot. This is really getting into computational complexity, though, and might honestly be better at math.SE? The meat here really boils down to the last paragraph - what's the effect of parallelism on certain types of computations? – Shinrai – 2011-08-17T19:42:07.447
@Shinrai I will admit the thought crossed my mind when I was posting this, but felt it was a better fit here at Super User. I understand if this should be closed for being too broad, but would it also be worth considering making it part of the community wiki? – Breakthrough – 2011-08-17T19:43:51.760
I would say that while this is a good question for talking over while having a pint, it is pretty much not a good Stack Exchange question. – EBGreen – 2011-08-17T19:46:27.920
There are too many variables, what-ifs, and other parameters, plus ongoing technology changes, to develop a succinct answer that will be relevant for more than a specific period of time. This is an interesting topic for a forum or blog, but not something to be pinned down as an 'answer'. I have voted to close for this reason, so let the flaming begin!!! – Linker3000 – 2011-08-17T20:24:51.520
@Breakthrough -- Ultimately cache is just another layer of memory and another bottleneck -- the MIPS rates I quoted assumed cache. Most MPs have 2-3 layers of cache, in addition to "main store". And all manner of NUMA configurations exist, some with a common backing store for the multiple processors, some where each processor has an independent store but they "steal" from each other, etc. – Daniel R Hicks – 2011-08-17T21:07:10.547
@Linker3000 no flaming here, you have a valid point. Hopefully this question can be further explored in the future (depending on how our technology progresses). Regardless, I think everyone who looked at this question should read the following news article (it's really cool): IBM produces first working chips modeled on the human brain. – Breakthrough – 2011-08-19T00:26:36.320