Computer, Operating and Programming.

If we’re ever going to turn the corner from single-processor computing to massively parallel and/or distributed computing, we need 3 big changes that must happen cooperatively: 1) a new instruction set, 2) a new programming language and 3) a new OS.

For the last decade, each of these critical components has evolved in a vacuum, sometimes seemingly in spite of the others.

Put in car terms: (CPUs) Tire manufacturers have “innovated” putting four tires on each wheel, giving you four times as much traction and therefore potentially four times as much speed; (Languages) engine manufacturers made engines greener to the point spiders live in them (while hoping you didn’t notice diesel engines get 2x the mpg); (OSes) car manufacturers have concentrated on making the vehicles look prettier in a larger variety of car parks.

Each is hamstrung by the demands and constraints of the others, producing a net imperfection.

The x86 architecture and instruction set are single-threaded. Your multicore CPU is essentially a hack. Today’s predominant languages are fundamentally single-core. Multithreading is essentially a magic trick. Operating systems have to combine these two clusterfraks as best they can to take advantage of them. Today’s multitasking is essentially still a gimmick.

We need a new language, call it CN, which starts with the disposal of a critical deadweight hanging round the neck of modern languages: the human-readable source file.

Text files are inherently sequential, and yet … here we are, still trying to write complex parallel computer programs in them.

I’m not talking about graphical programming, which is always going to be problematic because of the need to input things like names, values and text strings. But it’s time to cast out the notion of programming in an 80×25 text window and build a language on the assumption of a graphical input environment.

It’s worth remembering that the “C” language was born hand-in-hand with Unix; “C#” is very much the language of .NET.

The next generation of computing needs a “CN” to champion it; only this time it needs to incorporate the hardware developers to be successful.

The operating system needs to have control over what resources a piece of code can use, so it needs much better integration with the hardware: the OS kernel deserves its own piece of silicon so that it has capabilities that software doesn’t. In a parallel CPU, the OS needs constrictive abilities, while application software and drivers only need selective abilities.

Our new language needs instruction, block and thread level parallelism and concurrency, done so that a typing programmer can interact cleanly with a graphical depiction of the fact that Stuff Doesn’t Happen Sequentially.

In the same way that Microsoft’s Visual C# 2010 is able to show you all kinds of programming and compiler errors – literally – as you type, we need instant feedback on issues of contention and sequentiality. We also need much better ways to asynchronize code, implementing things like coroutines.
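
To be clear about the pain today: in plain C, the closest thing to a coroutine is a hand-rolled state machine along these lines. This is only a minimal sketch (nothing about it is standard), but it shows the bookkeeping the language should be doing for you:

#include <stdio.h>

/* A hand-rolled "coroutine": a generator that yields 10, 20, 30 across
   successive calls by stashing its state in a struct and resuming via a
   switch (the classic protothreads / Duff's-device trick). */
typedef struct { int state; int i; } coro;

static int counter(coro *c)
{
    switch (c->state) {
    case 0:
        for (c->i = 1; c->i <= 3; c->i++) {
            c->state = 1;
            return c->i * 10;   /* "yield" a value */
    case 1:;                    /* the next call resumes here */
        }
    }
    return -1;                  /* generator exhausted */
}

int main(void)
{
    coro c = { 0, 0 };
    for (int v = counter(&c); v != -1; v = counter(&c))
        printf("%d\n", v);      /* prints 10, 20, 30 */
    return 0;
}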

Today’s languages are full of stuff that makes them hard for humans to interpret on reading, the most classic of which is early termination of a sequence of code.

if ( something )
{
  if ( something else )
    print("Hello\n") ;
  else if ( something different )
  {
    if ( blue moon )
      return 42 ;
    else if ( red moon )
      throw exception("unhandled moon type") ;
    else
      x = 3 ;
  }
  else
    x = 2 ;
}

printf("Greetings!") ;

Exceptions get their own special disdain from many programmers, but IMO they’re as evil as “return”, “continue” and “break”, which in turn are as evil as “goto”: each disrupts the simple, sequential, logical flow of a piece of code.

This kind of “disruptive” code needs to be far more cleanly annotated – if not expressed:


if ( something )
{
  if ( something else )
    print("Hello\n") ;
  else if ( something different )
  {
    ---flow---------------------------------------------------------------------------
      RETURN 42  WHEN ( blue moon ) ;
      THROW exception("unhandled moon type") WHEN ( red moon ) ;
    ---------------------------------------------------------------------------flow---
    // Resume doing what we are intending to do.
    x = 3 ;
  }
  else
    x = 2 ;
}

OK: That needs a lot of work.

But if you’re developing a smart compiler, there’s no reason you couldn’t be more creative, such as creating a visually separate space in which exceptional conditions are placed, and for which the compiler/executable builder subsequently finds the most efficient/reliable place to inject that code.

Most importantly, this allows for the same sorts of separation of concerns that other sectors of development (e.g. web, XAML, etc) have begun to see.

Consider the following (as a very, very preliminary concept):
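
Something along these lines – purely a sketch, and every name in it (no_such_player, Player::Lookup, foundPlayer, the when/if clauses) is a placeholder for ideas picked apart below:

no_such_player(playerID) when player == null;

returns:
  false if player == null;
  true if foundPlayer;

code:
  Player* player = Player::Lookup(playerID);
  if ( player->IsHappy() )
    foundPlayer = true;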

Off the bat: there are a bunch of things that’ll make you want to go “ick” about this, if you’re a traditional programmer.

SUCK IT UP, OLD FART.

The first is going to be that there is code written before there are any variable declarations. Take the line

no_such_player(playerID) when player == null;

My concept here is that, with CN, you are using a smart IDE/compiler combination. Details about instruction ordering are largely a matter for the system to worry about. In short, it says “whenever I assign something to player, inject this test”.

There is a really crucial change here, in that the source code becomes its own form of P-code, into which the compiler/editor can actually inject all kinds of build-time functionality. Knowing that it is going to be blocking null assignments to player, it can be smart about where it performs the checks, rather than needing to just blindly always check.

Which means you get to write defensive code without doing it defensively.
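
As a rough illustration in today’s C (player_lookup and no_such_player here are just stand-ins for the CN example above), this is approximately what the IDE/compiler would generate behind your back, and only at the assignment sites where it can’t already prove the pointer is non-null:

#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

/* Stand-ins for the hypothetical Player::Lookup and no_such_player. */
typedef struct { int id; int happy; } Player;

static Player *player_lookup(int playerID)
{
    static Player p = { 1, 1 };
    return playerID == 1 ? &p : NULL;   /* only player 1 exists */
}

static void no_such_player(int playerID)
{
    fprintf(stderr, "no such player: %d\n", playerID);
}

/* Roughly the code a CN compiler might emit for the constraint
   "no_such_player(playerID) when player == null": the null test is
   injected after the assignment to player, the programmer never types
   it, and it disappears wherever the compiler proves it redundant. */
static bool find_player(int playerID)
{
    Player *player = player_lookup(playerID);
    if (player == NULL) {               /* injected check */
        no_such_player(playerID);
        return false;
    }
    return player->happy != 0;
}

int main(void)
{
    printf("%d %d\n", find_player(1), find_player(2));
    return 0;
}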

In this rough mock up, you write the ideal code-flow, and it becomes the IDE/compiler’s job to annotate where code flow might be disrupted. For example, it needs a clear, graphical metaphor to indicate that “player = Player::Lookup” might cause the routine to end.

Likewise, the “returns” block defines operations that may result in the function returning to the caller, which need to be clearly annotated.

The language will call for additional mechanisms to easily prevent that kind of thing. Let’s say you have a piece of code that is set to return false if the value of “player” becomes null, but you want to write code that ensures stray values don’t linger. For that you might need a new variant of “=” that prevents or defers checking of player to a later point.

For that, I propose “_=” which would be depicted as ≡.

returns:
  // If player is set to null at any point, fail.
  false if player == null;
  // Compiler can eliminate foundPlayer because
  // we only use it for eliciting a return.
  true if foundPlayer == true;
  true otherwise;

code:
  Player* player ≡ null; // I typed = but clever compiler switched it.
  while ( something )
  {
    player ≡ null; // get rid of old version but don't return.
    if ( monday ) player = MondayPlayer(something);
    else player = OtherPlayer(something);
    if ( player->IsHappy() )
      foundPlayer = true;
  }

Perhaps the IDE would render “foundPlayer = true;” in italics to denote a clause that can result in exit of the block, and perhaps it would render it in bold to indicate it will always cause it to exit.

Voila: Visual coding without having to revert to drawing flow charts.

Of course, this attempt at a tentative description is flawed through its over-proximity to C. You probably don’t need to say “foundPlayer = true;” but rather something simpler like “foundPlayer!”.

With this basic concept, you can start to go about devising methods to display parallel code, or code that will parallelize (it would be nice to have the IDE/compiler hint to you about code that doesn’t run into dependencies, as well as giving you well-understood markup methods for indicating where you want synchronization).
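
For instance (a hypothetical hint, not any existing tool): the first loop below has no cross-iteration dependencies, so the IDE/compiler could flag it as freely parallelizable; the second carries a dependency from one iteration to the next and would be flagged as sequential unless the programmer marks it up, say, as a reduction:

#include <stddef.h>

/* Independent iterations: each out[i] depends only on in[i], so an
   IDE/compiler could hint that this loop parallelizes trivially. */
void scale(double *out, const double *in, size_t n, double k)
{
    for (size_t i = 0; i < n; i++)
        out[i] = in[i] * k;
}

/* Loop-carried dependency: every step needs the previous sum, so this
   one stays sequential unless explicitly marked as a reduction. */
double total(const double *in, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += in[i];
    return sum;
}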

The language, though, needs to be developed hand-in-hand with both the underlying CPU instruction set/architecture and the operating system it’s going to run under. Why? Because both will drastically alter the resulting language. We’re a long way from DOS, and the operating system is very much an abstraction layer between the little black box in which an application runs and the environment in which it lives (devices, network, etc). If the language doesn’t natively support concepts like parallelism and security, you wind up with those concepts relegated to extensions and advanced materials. But the CPU is going to affect how any of those things can be achieved efficiently. There are languages out there that have these kinds of concepts built in, but they tend to be highly inefficient, because you either do it the operating system’s way or the CPU’s way, and each seems to be in contradiction with the other.

  1. easting
    March 19, 2011 at 6:43 pm | #1

    0100110101111001001000000110100001100101011000010110010000100000011010000111010101110010011101000111001100101110001000000010000001000010011000010110001101101011001000000111010001101111001000000110001001101001011011100110000101110010011110010010110000100000010010010010000001110011011000010111100100100001

  2. Coubo
    March 20, 2011 at 2:55 am | #2

    Good article. Yes, current language and CPU environments are broken. It’s mainly because, a few years back, people thought we would keep increasing the speed of a single CPU, but the last 5 years showed that increasing commercial CPU speed did not work that well. We thought we would have 15GHz CPUs in 2011, but instead we have 4×3.6GHz. Language and OS did not follow, and the key reason they didn’t is that it’s actually very complex.
    I would even go further in the spirit of “code can no longer be text”: a true multi-threaded program should be represented as a graph, with nodes being pieces of code and links being logical interactions between the nodes, redirecting either data (“return”) or logic (“if”). Of course this may also create a huge burden on the programmer to implement, and it is hard to make efficient, as we don’t know how many CPUs the code will eventually run on.

    For example, if I know I run with 4 CPUs and I need to work on image rendering/processing software, I may decide to cut my image into 4 quadrants and then have each CPU work on its quadrant at the same time. Then I may need further processing to “connect” the 4 pieces of the image, with some extra processing on each of the 4 frontiers between them. Pretty efficient. However, if I only know I run on “N” CPUs, it quickly becomes nightmarish: how to separate the processing, how to implement the processing that reconnects my pieces of image together. This is of course possible, but if you think about the huge amount of time you have to spend designing the “N” code, it’s pretty depressing.

    But the reality is that the need for such performance is limited to some “critical” applications such as kernels, video drivers, power-hungry games, etc.

  3. Tuure Laurinolli
    March 20, 2011 at 4:51 am | #3

    Have you looked at the actual efforts at parallelism-friendly languages, such as X10 or Fortress?

  4. Clifford Adams
    March 20, 2011 at 9:44 am | #4

    First of all, I want to say that your “rants” like this one are the main reason I keep reading your blog. I used to be a rather strong idealist myself, and I even had the fun of implementing some of my big ideas. None of my ideas changed the world, but a couple of my programs made a few thousand people a bit happier for a few years.

    Your language ideas about getting away from text/source file programming remind me strongly of Smalltalk, even the new _= idea. (Smalltalk used to use a single-character left arrow like <- for assignment, but it caused lots of difficulties.)

    I highly recommend looking briefly at Squeak–a Smalltalk implementation for Windows (and Macs/Linux too). Only 70 megabytes for the whole all-in-one distribution, and a huge part of that is the full source code. See http://www.squeak.org/ if you want to try it. (No install needed, just unzip and run the .exe)

    Finally, take a look at http://en.wikipedia.org/wiki/Open_Cobalt (written in Squeak). It looks like it implements some of the ideas you're talking about. For example, it has an interesting distributed synchronization system: http://www.opencobalt.org/about/synchronization-architecture

    Thanks (?) for rekindling my interest in Squeak yet again… :-)

  5. March 20, 2011 at 5:06 pm | #5

    They’re fundamentally hampered by the OS APIs they have to interact with, particularly for parallelism, which is an artifact of their evolution over single-core and SMP architectures.

  6. March 20, 2011 at 5:08 pm | #6

    Also, what they largely seek to do is facilitate programmer specification of parallelism. They’re still incapable of atomic parallelism, because the facility just isn’t provided to them.

  7. March 21, 2011 at 1:55 am | #7

    @Coubo

    I agree about graph representation — and I can envision that as a side-panel view that can be expanded for browsing, but is omnipresent while editing the code segments.

    Apply that concept to a “smart” combined IDE/compiler, and the burden you’re concerned about begins to fade.

    However: I’m leery of “graphical” (as opposed to merely visual) editing, because it’s something someone is always trying, and it ultimately fails due to the tediousness of switching between keyboard input – e.g. for identifiers, values, text, etc. – and graphical editing.

    As to the performance aspect, it’s not entirely true. Most applications have deep routines or guts that need to be efficient, whether it’s simply a routine for loading from disk or whatever.

    And if it is possible to make efficiency – not extreme optimization – easy, then it is possible to support the ongoing growth of the active-application ecosystem (that is: the large assortment of applications most people have running as services, background tasks, etc). It also makes the cloud more viable, because the fewer resources your trivial app uses, the more work it can get done on spare cycles “out there” :)

    Again, if the compiler is more involved in the creation of the code, e.g. by making small optimizations by eliminating un-needed defensive programming tests, you continue to move closer to that.

    (c.f. if you call a routine that returns a pointer, specifying the behavior when that routine returns NULL is a good thing, and that should remain in the source, but upon determining that the routine can never return null, the compiler can eliminate the test and conditional)

    @Clifford

    I’m a language junkie: I actually taught myself C in the first place, back in 82 or 83, so that I could write a language for writing MUDs :)

    But I’ve become disillusioned of late because each new language I’ve looked at increases the sense of pressure I feel for a refresh of the complete trinity of cpu-os-language.

    CPU? Because it largely dictates the wiles of the OS, the weight of the OS, and the ease of parallelism. (The techniques currently required to get code to run across multiple CPUs are akin to having to send yourself a text message saying “get out of the car and push”, waiting for it to arrive, reading it, and then getting out of the car to push.)

    OS? Because it determines the API and ABIs and thus imposes a lot of considerations on the language implementation and therefore design.

    Language? Because we need one that is designed from the ground up with the CPU+OS changes, that abandons the text-file roots, and that depends upon the assumption of a visual editing system so that it can say “that will be automated”, “the programmer will not have to be responsible for that”, etc., etc. But it should also avoid the usual issue of graphical languages, which generally only let you do what right-clicks and dragging will let you do.

    Again, my example of post-assignment validation. It should be an error to have an unvalidated assignment without further programmer input, but that input shouldn’t have to consist of the defensive programming strategy of testing values.

    Rather: the programmer should be able to provide validation constraints almost as markup, as information to the IDE/compiler.

    You could achieve the kind of visual editing with MSIL or some other form of byte-code; but that doesn’t address the other issues.

    However by having the IDE/compiler sitting so close between code and op-codes you get another possibility: translation.

    Perhaps the IDE could directly import/translate code from other languages into the format used by the editor; and perhaps it could actively translate code between languages for display; and perhaps it could actually let you type code in C and via on-the-fly translation convert it into the current language for you.

  8. madrebel
    April 1, 2011 at 2:10 am | #8

    So what are Intel and AMD doing about this? They seem to have the most to gain from parallelism.

  9. kilemall
    February 18, 2012 at 12:01 am | #9

    Guess I would think in terms of runtime optimization: the various threads breaking out as the OS can ‘read’, or has ‘learned’, that X instructions coupled with Y program are most efficiently run as # threads at this breakout point, with the mama thread handling synchronization issues.

    SO much depends on total workload of a given machine, not just the structure of the given program you are writing.

    Disclaimer- I’m coming at this as a mainframer, in my weird world we actually scaled back from 3 engines to 2, used less MIPS but got faster runtime results. We have tools such as WLM for resource management and MIM for sharing files across systems and entirely different boxes.
