Multiple cores, why not CPUs?

Historically, multi-CPU motherboards were generally graded “server” (i.e. expensive). Now that multi-core is pretty much de facto, you’d expect to see multi-CPU motherboards at the desktop/workstation grade.

But most dual-CPU motherboards still seem to be labelled “server”. Pricing is coming down, although it remains significantly higher than for single-CPU motherboards.

My hunch is that if desktop performance consumers spent $500 on a motherboard, $2000 on a pair of i7 Extremes and $500-$1500 on sufficient cooling, they’d be kinda upset to find performance about the same as a single CPU, and possibly slower in many cases.

Uncovering why would damage Intel/AMD’s calm. Perhaps blow the lid on the shameful state of current-gen multi-core CPUs. No lengthy explanation this time; let’s just say that I view the “Core i” generation of CPUs as an alpha/beta market test.

What gets me is that multiple CPUs ought to be hot on the to-do list of the CPU vendors, unless they really are desperately trying to pretend the GPU market doesn’t exist.

Multiple CPUs on a motherboard are expensive because the motherboard has to implement some of what’s going on inside multi-core CPUs: memory and cache sharing.

It’s the hardware version of the same problem software has with multi-core: the cores/CPUs are independent with almost zero facilities for interop.

It boils down to this: if you dig up an old enough version of Windows, one that doesn’t do SMP (Symmetric Multiprocessing), and put it on your shiny multi-core CPU, it will use one core and one core only. All of the applications you start up, even multi-threaded applications, will run on that one core (or go very horribly wrong as all the cores try to run the same code at the same time, unaware of each other). Multi-core is not a switch you flick, but an activity that the operating system and the cores must undertake together: turning on a light (flick) vs. making coffee.
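To make the “not a switch you flick” point concrete, here’s a small sketch (Python, and Linux-only, since `os.sched_setaffinity` isn’t available everywhere) that confines the current process to a single core, which is roughly what an SMP-unaware OS leaves you with:

```python
# Sketch: approximate an SMP-unaware OS by confining this process to CPU 0.
# Linux-specific: os.sched_setaffinity doesn't exist on Windows/macOS.
import os

if hasattr(os, "sched_setaffinity"):
    os.sched_setaffinity(0, {0})       # pid 0 means "this process"; allow only CPU 0
    # Every thread this process starts now competes for that single core.
    print(os.sched_getaffinity(0))     # -> {0}
```

Spawn as many threads as you like after that call: the kernel will schedule them all onto core 0, which is exactly what the old non-SMP OS gives you by default.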

If the CPU vendors don’t shape up soon, GPU processing is going to put a serious hurt on them. Making the best use of multi-core CPUs means writing code that works well in parallel. The overhead of doing this on a GCPU platform is tiny because GPUs are parallel compute environments: there is a supervisor that you feed with descriptions of the work you need done, and the task of then executing all that work is handled for you, in hardware, by that supervisor.

Adding more cores to a current-gen CPU means more CPU operations spent scheduling, distributing and organizing the cooperative execution of any sequence of instructions. Adding more cores to a GCPU doesn’t.

While this gap remains unbridged, the benefits of slapping extra CPUs onto a motherboard are marginal for most applications. Only server systems, where you anticipate running lots and lots of independent, non-cooperating processes in parallel, are likely to benefit.

Multi-core CPUs were, IMO, meant to drive software and hardware development towards parallelization. But today’s multi-core CPUs are bloody awful parallel compute environments, into which the GCPU was sucked inadvertently.

Maybe Intel or AMD should consider looking at a new socket architecture: one that allows for a primary PCO (parallel/co-op) CPU and secondary POP (parallel-only processing) core farms (equivalent to the compute farm of a GPU), bridged by a supervisor that allows machine-level operations to dispatch workloads to the compute farms.
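To be clearer about what I mean, here’s a toy software model of that split. Every name in it (PCO, POP, `Supervisor`) is my own invention for illustration; nothing here describes a real ISA or socket design:

```python
# Hypothetical sketch of the PCO/POP split: a PCO core runs the sequential
# program and hands parallel-only workloads to a supervisor, which fans
# them out across a POP "core farm". All names are invented for this post.
from concurrent.futures import ThreadPoolExecutor

class Supervisor:
    """Bridges the PCO core and the POP farm: accepts a workload
    description and handles its distribution and execution itself."""

    def __init__(self, farm_size=4):
        self.farm = ThreadPoolExecutor(max_workers=farm_size)  # the POP farm

    def dispatch(self, kernel, inputs):
        # The PCO core only *describes* the work; the supervisor executes it.
        return list(self.farm.map(kernel, inputs))

sup = Supervisor()
print(sup.dispatch(lambda x: x + 1, range(4)))  # -> [1, 2, 3, 4]
```

The point of the model: the sequential side never schedules individual units of work, it just names a kernel and the data, which is exactly the cheap dispatch that GPUs already enjoy.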

4 Comments

Dude… I read this stuff and sometimes I am just like… what?

The primary CPU / parallel unit approach sounds like Cell. Theoretically very powerful, but rather difficult to take advantage of. Maybe in 7-10 years or so something like that will actually be commonplace.

How do current CPUs make especially awful parallel computing environments, though? Obviously they are not as good at trivially parallelizable number crunching as GPUs, but that’s not what they were designed to do either. The same techniques still apply on both if you want to maximize performance, so the current CPUs *do* steer programmers in the right direction for future, hyper-parallel platforms.

On the whole you can only take advantage of parallel processing if you have lots of independent, non-cooperating processes. That’s what GPUs do. The graphics workloads just happen to have a ton of those, as do servers for which each client is essentially independent etc.

Multi-cores need less hardware than multiple parallel CPUs, and the cores can share resources more easily, which can mean less overhead in our code (a process allocates threads over the cores, even virtual ones). lol, actually I want one core for each thread, within one process :D
I don’t think we will see parallel CPUs in the end-user market in the next few years, since end users don’t work in parallel the way a server or a brute-force math machine does: they want the application they’re working with to run smoothly with a lot of colors, and the computer shouldn’t make noise or cost too much in electricity.
I think single-CPU parts will either leave the market or retreat into a special niche, like the good old 68k did as a microcontroller. Meanwhile, at multi-GHz clocks the physical dimensions of a CPU become critical relative to signal speed (the speed of light over millimeters). I hope that 4x multi-cores are just the beginning, and that we can get down to around 10mV core voltage, producing so little thermal power that we could stack a 256-core cube in one CPU :D

Hope you’re feeling better soon, kfsone.
