So you’ve seen the “To the cloud!” adverts, and yesterday I tried to explain what that’s about.
Since missing the initial Internet bandwagon many years ago, Microsoft has been rushing to catch up. Just as they were closing the gap, mobile devices began to flourish, as did web apps like Google Docs.
This tossed a new tack onto the tarmac, and persuaded Microsoft to shift into catchup overdrive, and they seem to have really leaped aboard the early run of The Cloud bus…
CPU, RAM and disk manufacturers have really failed to keep up with demand in the last few years.
After it became clear that the Itanic was going to sink, and Intel realized they were about to enter a “nothing new here” vacuum, they chose to flip us the multi-core birdie: x86-compatible multi-core CPUs are somewhat smoke-and-mirrors.
Our desktop multi-core CPUs have been a travesty. A second CPU – that’s gonna double your processing power, right? But no, because the cores are just more of the same: unless software is written to spread its work across them, the extra cores mostly sit idle.
Itanium scared Intel off developing a new chip with new machine-code instructions, so the Core series CPUs are akin to upgrades of the old Pentium in that they run essentially the same instruction set … in particular, one lacking any kind of parallel-processing features.
By contrast, when Intel brought us the Pentium, one of the big features was that it used multiple processing “pipelines” to let it perform on-the-fly parallelization of code a few instructions at a time.
Which meant that you could take an off-the-shelf piece of software and … it just magically went faster.
But Core CPUs don’t do anything of the sort, and despite continued money being funneled into Itanium, it’s going less than nowhere. Unless a program has been carefully crafted in a very specific fashion by its developers, it will go only as fast as – or slower than – it went on a single-core CPU :(
CPU clock speeds haven’t picked up at all in nearly six years.
(Note: that’s going by CPU release speeds, not maximum available speed; Intel pushed as far as 3.8GHz in early 2005 before nose-diving with slower-clocked multi-core CPUs.)
By contrast, computer software has continued to get more sophisticated, more complex, and more resource-demanding: developers had been counting on those continued CPU, memory and disk increases. This disparity between supply and demand hasn’t gone unnoticed.
When Core 2 CPUs first slipped onto the scene, I think most developers saw them as an anomaly for specialist users doing things like video rendering, etc. I seem to recall a very strong prevailing belief that this was a short-term gimmick, a blip to cover the CPU vendors’ asses while they cranked out the next speed upgrade.
The transition to multi-core awareness is still on-going, and the transition to effective and efficient multi-core programming is pretty much in its infancy for more established developers (read: old dogs). Heck, the majority of traditional C/C++/Java/etc. programmers still don’t realize that calls to malloc(), calloc() and friends in any kind of threaded process impose hidden mutex overheads, making their application less thread-friendly.
It’s also not an easy transition to make, which gives those developers an incentive to migrate to the simpler world of mobile development – devices that generally run single cores, or CPUs still in the “pipelining” era of doing the work for you at runtime – rather than figure out how to multi-thread their applications.
It also makes the cloud seem more appealing, but only because people are still getting to grips with what the cloud is. Well-written cloud applications can deliver phenomenal processing power. But it is not just somehow magically like tapping into a massive supercomputer.
So the cloud has an appeal in that it is some unquantified capacity of processing power that your compute device can offload work to, assuming the application is well tuned for it.
Another appeal the cloud has is decentralization: In theory, you can access that power regardless of what front-end device you are using – so your phone, iPad, console, PC and even fridge can all have the same application processing power behind them.
While this is especially true of storage, the reality is that the world wide web already provides a perfectly adequate plethora of ways to achieve it. And the cloud doesn’t really offer anything above distributed web applications that is going to keep people coming back for it.
Cloud: great for developers, really not relevant to end consumers – although “cloud powered” might indicate that the web apps you’re running are scalable and should have a fair amount of power behind them.
There’s also a huge WTF lurking with “the cloud”: unlike “the internet”, which describes a singular globe-encompassing network of computers, “the cloud” refers to any one of many, many cloud-computing networks, each unique and entirely independent. Amazon, for example, has at least three clouds (West, East and Europe). They can talk to each other over the internet, so you can copy stuff around between them, but something you install in “West” isn’t automatically available in “East” or “Europe” – you’d have to copy it across just as you would between two separate computers. (The term “cloud” is still so misunderstood and nebulous that Amazon hasn’t yet been properly called out for the fact that this is not “a cloud” but “some clouds”.)
If you slap a bunch of stuff up into the Amazon “cloud” and then try to access it from another cloud, well you get the idea.
I suspect that when this concept registers with the average joe fixin’ to get cloud enabled, it’ll drop like a guillotine.
If you stop and look around at the kinds of apps and plugins people tend to be going for on their desktops and mobiles, I think you’ll find the real interest in cloud computing is the concept of a personal cloud. We want to be able to transition between computing, input and output devices with ease. Why can’t my cell phone double as a remote for my TV? Why can’t my tablet double as a controller for my Xbox? Why can’t I “flick” a web page from my PC to my laptop or my tablet or my cell phone?
(I think you’ll find that the Windows 7 concept of a “homegroup” was a step in this direction :)
The idea that switching from my desktop PC to my cell phone won’t cripple the computing power available to me is fairly attractive; and if my data is in “the cloud” (whichever ‘the’ that is), then my current internet connection won’t be a big factor. But there’s the problem – and it’s not even a new one. Remember a little OS called DOS? DOS was born into a similar situation: modems were getting faster, everyone had a telephone line, and terminals were cheap. Why go to all the hassle of buying your own computer and software when you could dial in to a central Unix box from anywhere?
We want our stuff local and available.
Microsoft do need a cloud OS, but only for developers. And for it to really succeed, they’re going to need to resist the urge to snap up, or compete with, every cloud product it engenders. It’s hard to see Microsoft being able to do that.
But they are also going to need to build a “cloudy” OS for end users and businesses, who want to bring together their own devices into a cloud.
This is something Microsoft has always been absolutely terrible at. MS OSes have been some of the worst networking platforms ever: I can’t begin to imagine how many times I’ve rebooted a Windows PC because a network access went south, rendering the whole machine unusable.
And MS aren’t exactly famous for interoperability (mount a Linux file system under Windows, anyone?)
That said, I think Microsoft have the clout, the determination, and the capability to make the same kind of relevant market breakthrough that DOS made – but only if they truly and decidedly refocus on the core business of operating system development. They should sell off as many of their products as possible and concentrate on MS 2.0, if you will.