Historically, multi-CPU motherboards were generally graded “server” (i.e. expensive). Now that multi-core is pretty much de facto, you’d expect to see multi-CPU motherboards at the desktop/workstation grade.
But most dual-CPU motherboards still seem to be labelled “server”. The pricing is coming down, although it remains significantly higher than for single-CPU motherboards.
My hunch is that if desktop performance consumers spent $500 on a motherboard, $2000 on a pair of i7 Extremes and $500–$1500 on sufficient cooling, they’d be kinda upset to find performance about the same as a single CPU, possibly even slower in many cases.
Uncovering why would damage Intel/AMD’s calm. Perhaps blow the lid on the shameful state of current-gen multi-core CPUs. No lengthy explanation this time; let’s just say that I view the “Core i” generation of CPUs as an alpha/beta market test.
What gets me is that multiple-CPU support ought to be hot on the to-do list of the CPU vendors, unless they really are desperately trying to pretend the GPU market doesn’t exist.
Multiple CPUs on a motherboard are expensive because the motherboard has to implement some of what’s going on inside multi-core CPUs: memory and cache sharing.
It’s the hardware version of the same problem software has with multi-core: the cores/CPUs are independent with almost zero facilities for interop.
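To make that concrete, here is a toy Python sketch (my illustration, not from the original post) of what “almost zero facilities for interop” means in practice: the cores will happily race each other on shared data, so the program has to supply its own coordination.

```python
import threading

# Two threads share a counter. The cores give us no coordination for
# free, so the program must supply its own (here, a Lock).
counter = 0
lock = threading.Lock()

def work(n):
    global counter
    for _ in range(n):
        with lock:  # explicit interop: without it, the updates can race
            counter += 1

threads = [threading.Thread(target=work, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000 -- but only because we coordinated by hand
```

Every byte of that coordination is the programmer’s problem; nothing in the hardware hands it to you.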
It boils down to this: if you dig up an old enough version of Windows, one that doesn’t do SMP (Symmetric Multi-Processing), and put it on your shiny multi-core CPU, it will use one core and one core only. All of the applications you start, even multi-threaded applications written for multiple cores, will run on that one core (or go very horribly wrong as all the cores try to run the same code at the same time, unaware of each other). Multi-core is not a switch you flick, but an activity that the operating system and cores must undertake: turning on a light (flick) vs. making coffee.
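You can watch the operating system doing that work. This Linux-specific sketch (an illustration I’ve added, using Python’s `os.sched_setaffinity`) pins the current process to core 0, roughly recreating the one-core situation a pre-SMP OS would leave you in:

```python
import os

# Linux-specific: it is the OS scheduler, not the silicon, that decides
# which cores a process may run on. Pin this process to core 0 and every
# thread it spawns is stuck there, no matter how many cores the chip has.
all_cores = os.sched_getaffinity(0)   # set of cores the scheduler may use now
os.sched_setaffinity(0, {0})          # restrict this process to core 0 only
print(os.sched_getaffinity(0))        # {0}

os.sched_setaffinity(0, all_cores)    # restore the full set
```

The silicon didn’t change between those two calls; only the scheduling activity did.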
If the CPU vendors don’t shape up soon, GPU processing is going to put a serious hurt on them. Making the best use of multi-core CPUs means writing code that works well in parallel. The overhead of doing this on a GCPU platform is tiny, because GCPUs are parallel compute environments: there is a supervisor that you feed with descriptions of the work you need done, and the task of executing all that work is then handled for you in hardware by that supervisor.
Adding more cores to a current-gen CPU means more CPU operations spent scheduling/distributing/organizing the cooperative execution of any sequence of instructions. Adding more cores to a GCPU doesn’t.
While this gap remains unbridged, the benefits of slapping extra CPUs onto a motherboard are marginal for most applications. Only server systems, where you anticipate running lots and lots of independent, non-cooperating processes in parallel, are likely to benefit.
Multi-core CPUs were, IMO, meant to drive software and hardware development towards parallelization. But today’s multi-core CPUs are bloody awful parallel compute environments, into which the GCPU was sucked, inadvertently.
Maybe Intel or AMD should consider looking at a new socket architecture, one which allows for a primary PCO (parallel/co-op) CPU and secondary POP (parallel-only processing) core farms (equivalent to the compute farm of a GPU), bridged by a supervisor that allows machine-level operations to dispatch workloads to the compute farms.