cpus

Computers, Operating Systems and Programming Languages.

If we’re ever going to turn the corner from single-processor computing to massively parallel and/or distributed computing, we need three big changes, and they must happen cooperatively: 1) a new instruction set, 2) a new programming language and 3) a new OS.

For the last decade, each of these critical components has evolved in a vacuum, sometimes seemingly in spite of the others.

Put in car terms: (CPUs) Tire manufacturers have “innovated” by putting four tires on each wheel, giving you four times as much traction and therefore potentially four times as much speed; (Languages) engine manufacturers made engines greener to the point that spiders live in them (while hoping you didn’t notice diesel engines get 2x the mpg); (OSes) car manufacturers have concentrated on making the vehicles look prettier in a larger variety of car parks.

Each is hamstrung by the demands and constraints of the next, producing a net imperfection.

Multiple cores, why not CPUs?

Historically, multi-CPU motherboards were generally graded “server” (i.e. expensive). Now that multi-core is pretty much de facto, you’d expect to be seeing multi-CPU motherboards in the desktop/workstation grade.

But most dual-CPU motherboards still seem to be labelled “server”. Pricing is coming down, although it remains significantly higher than for single-CPU motherboards.

My hunch is that if desktop performance consumers spent $500 on a motherboard, $2000 on a pair of i7 Extremes and $500–$1500 on sufficient cooling, they’d be kinda upset to find performance about the same as a single CPU, possibly even slower in many cases.

Uncovering why would damage Intel/AMD’s calm. Perhaps blow the lid on the shameful state of current-gen multicore CPUs. No lengthy explanation this time; let’s just say that I view the “Core i” gen CPUs as an alpha/beta market test.

The problem with multiple CPU cores…

Computers are based on sequences of 1s and 0s; bits. By chaining these together, you can form a vocabulary of instructions from a sort of tree. E.g. the first bit is either ‘close’ (0) or ‘open’ (1), and the second bit is either ‘gate’ (0) or ‘door’ (1). So, 00 is ‘close gate’, 10 is ‘open gate’ and 11 is ‘open door’.
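
To make that toy vocabulary concrete, here’s a minimal C sketch (the close/open gate/door words are just the example above, not any real instruction set) that decodes all four 2-bit words:

    #include <stdio.h>

    int main(void) {
        /* First bit picks the verb, second bit picks the noun. */
        const char *verbs[] = { "close", "open" };
        const char *nouns[] = { "gate", "door" };

        for (unsigned word = 0; word < 4; word++) {
            unsigned verb = (word >> 1) & 1; /* first (high) bit */
            unsigned noun = word & 1;        /* second (low) bit */
            printf("%u%u -> %s %s\n", verb, noun, verbs[verb], nouns[noun]);
        }
        return 0;
    }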

CPUs use fixed-size sequences of bits to represent their internal vocabulary like this; the result is called the instruction set. These machine instructions are usually incredibly simplistic, such as “add” or “divide”.
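
As an illustration, here’s a made-up fixed-width instruction set in C: every instruction is exactly 8 bits, with a 2-bit opcode and a 6-bit operand. No real CPU works exactly like this; it’s a sketch of the idea.

    #include <stdio.h>
    #include <stdint.h>

    /* A made-up 8-bit instruction format: the top 2 bits are the
       opcode, the low 6 bits an immediate operand. Every instruction
       is the same fixed size. */
    enum { OP_LOAD = 0, OP_ADD = 1, OP_DIV = 2, OP_PRINT = 3 };

    static void run(const uint8_t *program, int len) {
        int acc = 0;                          /* a single accumulator */
        for (int pc = 0; pc < len; pc++) {
            uint8_t op  = program[pc] >> 6;   /* opcode field  */
            uint8_t arg = program[pc] & 0x3f; /* operand field */
            switch (op) {
            case OP_LOAD:  acc  = arg; break;
            case OP_ADD:   acc += arg; break;
            case OP_DIV:   acc /= arg; break;
            case OP_PRINT: printf("%d\n", acc); break;
            }
        }
    }

Real instruction sets are far larger, but the principle is the same: fixed-size bit patterns naming simple operations.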

Typical computer programs are long sequences of these machine instructions, which use math and algebra to achieve more complex goals. This is called “machine code”.
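
A “program” for the made-up instruction set above is then just a list of those bit patterns. Appended to the sketch, here are four simple instructions chained together to compute an average:

    int main(void) {
        /* Four simple instructions that together compute (10 + 20) / 2. */
        uint8_t program[] = {
            (OP_LOAD  << 6) | 10,  /* acc = 10   */
            (OP_ADD   << 6) | 20,  /* acc += 20  */
            (OP_DIV   << 6) | 2,   /* acc /= 2   */
            (OP_PRINT << 6) | 0,   /* prints 15  */
        };
        run(program, 4);
        return 0;
    }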

Very few programmers still work in machine code; we tend to work in more elaborate languages which allow us to express many machine code instructions with a single line of text, and in a slightly less mind-bending way. This is “program code”.
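
In program code, the whole instruction sequence above collapses to a single line, and the compiler translates it back into machine instructions for us (exactly which ones depends on the compiler and CPU):

    #include <stdio.h>

    int main(void) {
        /* One line standing in for the whole instruction sequence above;
           the compiler expands it (roughly: load, add, divide) for us. */
        int average = (10 + 20) / 2;
        printf("%d\n", average);
        return 0;
    }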

Think of it this way: bits are letters of the alphabet; machine code instructions are the vocabulary or words of the language. Computer languages are the grammar, dialect and syntax one uses in order to communicate an idea via the computer.

At first, CPUs got faster, so the time each word took to process got shorter, making programs run faster.

Then that stopped and the manufacturers started slapping in more CPU cores.

But more CPUs do not equal more speed. In fact, they suffer from chefs-in-the-kitchen syndrome…
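
Here’s a minimal sketch of that syndrome (the thread and iteration counts are arbitrary): four POSIX threads all need the same lock, so four cores end up taking turns rather than cooperating, and the wall-clock time can be no better — often worse — than one core doing the work alone.

    #include <pthread.h>
    #include <stdio.h>

    #define THREADS 4
    #define STEPS   1000000

    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < STEPS; i++) {
            pthread_mutex_lock(&lock);   /* only one “chef” at a time */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t[THREADS];
        for (int i = 0; i < THREADS; i++)
            pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < THREADS; i++)
            pthread_join(t[i], NULL);
        /* All that parallel hardware, and the work was still serialised. */
        printf("counter = %ld\n", counter);
        return 0;
    }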