MS and the cloud.

So you’ve seen the “To the cloud!” adverts, and I tried to explain yesterday what that’s about.

Since missing the initial Internet Bandwagon many years ago, Microsoft has been rushing to catch up. As they were closing the gap, the mobile device began to flourish, as did web apps like Google Docs.

This tossed a new tack onto the tarmac, and persuaded Microsoft to shift into catchup overdrive, and they seem to have really leaped aboard the early run of The Cloud bus…

CPU, RAM and disk manufacturers have really failed to keep up with demand in the last few years.

After it became clear that the Itanic was going to sink and Intel realized they were about to enter a “nothing new here” vacuum, they chose to flip us the multicore birdie: x86-compatible multi-core CPUs are somewhat smoke-and-mirrors.

Our desktop multi-core CPUs have been a travesty. You think this is what you’re getting:

[diagram: what you expect – a second CPU, double the processing power]

Second CPU – that’s gonna double your processing power, right? But no, because the cores are just more of the same, so what you’re actually getting is:

[diagram: what you get – two copies of the same old CPU, side by side]

Itanium scared Intel off of developing a new chip with new machine-code instructions, so the Core series CPUs are akin to upgrades of the old Pentium, in that they use essentially the same instruction set … in particular, one lacking any kind of parallel-processing features.

By contrast, when Intel brought us the Pentium, one of the big features was that it used multiple processing “pipelines” to let it perform on-the-fly parallelization of code a few instructions at a time.

Which meant that you could take an off-the-shelf piece of software and … it just magically went faster.
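(For the curious, here’s a toy C sketch – my own illustration, not anything from an Intel manual – of the kind of hidden parallelism those pipelines exploit. The two-accumulator loop has independent additions the CPU can keep in flight simultaneously; the single-accumulator loop is one long dependency chain. Names and sizes are purely illustrative.)

```c
/* Toy illustration of instruction-level parallelism; the names, sizes and
   workload here are hypothetical, chosen only to show the idea.
   Compile without aggressive optimization (e.g. cc -O1 ilp.c) so the
   compiler doesn't restructure the loops itself. */
#include <stdio.h>

#define N 1000000
static float data[N];

/* One long dependency chain: each add must wait for the previous one. */
float sum_chained(void)
{
    float total = 0.0f;
    for (int i = 0; i < N; ++i)
        total += data[i];
    return total;
}

/* Two independent accumulators: a pipelined CPU can overlap these adds. */
float sum_paired(void)
{
    float a = 0.0f, b = 0.0f;
    for (int i = 0; i < N; i += 2) {
        a += data[i];       /* independent of... */
        b += data[i + 1];   /* ...this one, so both can be in flight */
    }
    return a + b;
}

int main(void)
{
    for (int i = 0; i < N; ++i)
        data[i] = 1.0f;
    printf("%f %f\n", sum_chained(), sum_paired());
    return 0;
}
```

A superscalar CPU pairs up instructions like those two adds automatically when they happen to be independent – which is why off-the-shelf code sped up – though it can’t break a genuine dependency chain like the first loop.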

But multiple cores don’t do anything of the sort, and despite continued money being funneled into Itanium, it’s going less than nowhere. Unless the program has been carefully crafted in a very specific fashion by the developers, it will go no faster – and possibly slower – than it did on a single-core CPU :(
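(What does “carefully crafted” mean? Roughly this: the developer has to slice the work up and hand each core its own piece. A minimal sketch in C with pthreads – the function and variable names are my own invention, purely for illustration:)

```c
/* Minimal sketch: a second core contributes nothing unless the developer
   explicitly hands it its own slice of the work.  Hypothetical workload;
   compile with: cc -pthread twocores.c */
#include <pthread.h>
#include <stdio.h>

#define N 8000000
static int data[N];

struct span { int lo, hi; long sum; };

/* Sum one slice of the array; each thread gets its own struct span. */
static void *sum_span(void *arg)
{
    struct span *s = arg;
    s->sum = 0;
    for (int i = s->lo; i < s->hi; ++i)
        s->sum += data[i];
    return NULL;
}

int main(void)
{
    for (int i = 0; i < N; ++i)
        data[i] = 1;

    struct span halves[2] = { { 0, N / 2, 0 }, { N / 2, N, 0 } };

    /* The "careful crafting": one explicit thread for the second core... */
    pthread_t t;
    pthread_create(&t, NULL, sum_span, &halves[0]);

    /* ...while the main thread works the other half on the first core. */
    sum_span(&halves[1]);
    pthread_join(t, NULL);

    printf("total = %ld\n", halves[0].sum + halves[1].sum);
    return 0;
}
```

Leave out the thread and the second core just sits there; neither the CPU nor a typical compiler of the day is going to split that loop for you.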

CPUs haven’t picked up any speed now in nearly 6 years:

[chart: CPU clock speeds at release, flat since around 2005]

(Note: those are CPU release speeds, not maximum available speeds; Intel pushed as far as 3.8GHz in early 2005 before nose-diving with slower-clocked multi-core CPUs)

By contrast, computer software has continued to get more sophisticated, more complex, and more resource-demanding: developers have been counting on those continued CPU, memory and disk increases. This disparity between supply and demand hasn’t gone unnoticed.

When Core 2 CPUs first slipped onto the scene, I think most developers saw them as an anomaly for specialist users doing things like video rendering, etc. I seem to recall a very strong prevailing belief that this was a short-term gimmick, or a blip to cover the CPU vendors’ asses while they cranked out the next speed upgrade.

The transition to multi-core awareness is still ongoing, and the transition to effective and efficient multi-core programming is pretty much in its infancy for more established developers (read: old dogs). Heck, the majority of traditional C/C++/Java/etc programmers still don’t realize that calls to malloc() or calloc() in any kind of threaded process impose hidden mutex overheads, making your application less thread-friendly.
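(A minimal sketch of that hidden cost – a hypothetical workload, not anyone’s real code. Many traditional allocators guard the heap with a single lock, so threads that allocate heavily end up queuing on it:)

```c
/* Four threads hammering malloc()/free(): on a classic single-lock
   allocator they serialize on the heap mutex instead of running in
   parallel.  (Modern allocators mitigate this with per-thread arenas,
   but the locking is still there under contention.)
   Compile with: cc -pthread heaplock.c */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define ITERATIONS 1000000
#define THREADS 4

static void *hammer_the_heap(void *arg)
{
    (void)arg;
    for (long i = 0; i < ITERATIONS; ++i) {
        /* Each call may take and release the allocator's internal lock. */
        void *p = malloc(64);
        free(p);
    }
    return NULL;
}

int main(void)
{
    pthread_t threads[THREADS];

    for (int i = 0; i < THREADS; ++i)
        pthread_create(&threads[i], NULL, hammer_the_heap, NULL);
    for (int i = 0; i < THREADS; ++i)
        pthread_join(threads[i], NULL);

    puts("done - time this against one thread doing 4x the iterations");
    return 0;
}
```

Time that against a single thread doing four times the iterations; the gap is the price of the hidden mutex.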

It’s also not an easy transition to make, which serves as an incentive for those kinds of developers to consider migrating to the simpler world of mobile development – devices that generally run on a single core, or on CPUs still in the “pipelining” era of doing the work for you at runtime – rather than trying to figure out how to multi-thread their applications.

It also makes the cloud seem more appealing, but only because people are still getting to grips with what the cloud is. Well-written cloud applications can deliver phenomenal processing power, but it is not somehow magically like tapping into a massive supercomputer.

So the cloud has an appeal in that it is some unquantified capacity of processing power that your compute device can offload work to, assuming the application is well tuned for it.

Another appeal the cloud has is decentralization: In theory, you can access that power regardless of what front-end device you are using – so your phone, iPad, console, PC and even fridge can all have the same application processing power behind them.

While this is most especially true of storage, the reality is that the world wide web already provides a perfectly adequate plethora of ways to achieve it. And the cloud doesn’t really offer anything above distributed web applications that is going to keep people coming back for it.

Cloud: great for developers, really not relevant to end consumers – although “cloud powered” might indicate that the web apps you’re running are scalable and should have a fair amount of power behind them.

There’s also a huge WTF lurking with “the cloud”: unlike “the internet”, which describes a singular, globe-encompassing network of computers, “the cloud” refers to any one of many, many cloud-computing networks, each unique and entirely independent. Amazon, for example, has at least three clouds (West, East and Europe). They can talk to each other over the internet, so you can copy stuff around between them, but something you install in “West” isn’t automatically available in “East” or “Europe” – you’d have to copy it across just like you would between two separate computers. (The term “cloud” is still so misunderstood and nebulous that Amazon hasn’t yet been properly called out on the fact that this is actually not “a cloud” but “some clouds”.)

If you slap a bunch of stuff up into the Amazon “cloud” and then try to access it from another cloud, well you get the idea.

I suspect that when this concept registers with the average joe fixin’ to get cloud-enabled, it’ll drop like a guillotine.

If you stop and look around at the kinds of apps and plugins people tend to be going for on their desktops and mobiles, I think you’ll find the real interest in cloud computing is the concept of a personal cloud. We want to be able to transition between computing, input and output devices with ease. Why can’t my cell phone double as a remote for my TV? Why can’t my tablet double as a controller for my Xbox? Why can’t I “flick” a web page from my PC to my laptop or my tablet or my cell phone?

(I think you’ll find that the Windows 7 concept of a “homegroup” was a step in this direction :)

The idea that switching from my desktop PC to my cell phone won’t cripple the computing power available to me is fairly attractive; and if my data is in “the cloud” (whichever ‘the’ that is), then my current internet connection won’t be a big factor. But there’s the problem. It’s not even a new problem: Remember a little OS called DOS? DOS was born at a similar time. Modems were getting faster, everyone had a telephone line, and terminals were cheap. Why go to all the hassle of buying your own computer and software when you could dial in to a central Unix box from anywhere!

We want our stuff local and available.

Microsoft do need a cloud OS, but only for developers. And for it to really succeed, they’re going to need to resist the urge to snap up, or compete with, every cloud product it engenders. It’s hard to see Microsoft being able to do that.

But they are also going to need to build a “cloudy” OS for end users and businesses, who want to bring together their own devices into a cloud.

This is something Microsoft has always been absolutely terrible at. MS OSes have been some of the worst networking platforms ever: I can’t begin to imagine how many times I’ve rebooted a Windows PC because a network access went south, rendering the whole machine unusable.

And MS aren’t exactly famous for interoperability (mount a Linux file system under Windows, anyone?)

That said, I think Microsoft have the clout, the determination, and the capability to make the same kind of relevant market breakthrough that DOS made – but only if they truly and decidedly refocus on the core business of operating system development. They should sell off as many of their products as possible and concentrate on MS 2.0, if you will.

 

4 Comments

You’re using clock frequency as the end-all metric for performance, which just isn’t accurate. The first Core 2s were faster than Pentium 4s despite being lower frequency, as were the Athlons of the same time period. Saying CPUs have failed to improve single-threaded performance isn’t accurate at all, is what I’m saying.

Also, you’re basing your belief in Microsoft’s ability to innovate on DOS? They didn’t even invent DOS; they licensed it.

“Using clock frequency as the end-all metric” – no, I’m simply using it as a metric. I don’t see anything indicating this is a PhD thesis or an encyclopedic description of everything, but lots of clues that I’m trying to be relatively layman-friendly.

That said, the improvements to single-threaded performance 2006–2011 don’t remotely match the kinds of improvements we were seeing 1993–2006.

Factor in the relatively slow increases in RAM speeds and very slow growth of disk capacity and speed (relative to the quantities of data etc we are working with today) and what you have is a net, overall, performance degradation.

And while clock speed might not be a great method for comparison, that chart goes back over enough iterations of processors to retain enough meaning – specifically, that the emphasis is now on cores rather than speed; and as long as the x86-based chips lack parallel instructions, performance gains are going to stay “coming soon”.

“Your belief in Microsoft’s ability to innovate on DOS”

Ah, yeah, I can see how you came to that interpretation. It was “relevant market breakthrough that DOS made”, wasn’t it? No, that doesn’t suggest any kind of belief in innovation, just some business smarts. So it must have been “Remember a little OS called DOS? DOS was born at a similar time”. Yeah! That was it. Birth = innovation. Absolutely, you got me.

Hrm. I only mention DOS those two times. Was it “MS OSes have been some of the worst networking platforms ever”? Wait. Was it “It’s hard to see Microsoft being able to do that.”? “This is something Microsoft has always been absolutely terrible at”?

Zero points for comprehension, my friend.

So the CPU industry is a total fail because in 5 years they couldn’t replicate the speed increases from the previous 13 years? That’s fair. You could argue Moore’s law hasn’t held, and I would agree with you: transistor counts have not doubled on a single-core basis. Sure, the entire socket has doubled, or more, though I doubt Mister Moore was thinking about multi-cores, but w/e.

We’re also right in the middle of a mass-storage renaissance. Sure, you can’t get 2TB of SSD, but you can get a system drive and a “games” drive relatively cheaply. If you really want to get crazy, you can get SSD storage that is as fast as RAM. It isn’t cheap, but the option is there.

I’ll give you the point on memory: QDR should have been on the market by now. I’ll pass blame on this to Rambus, who sues anyone who invents anything new because apparently they thought of it all already. There have been improvements in interconnect and memory speeds, albeit relatively small ones.

However, your angst, while somewhat accurate, is aimed at the wrong group. If ATA-100 wasn’t fast enough, you could buy SCSI, or you could run RAID, or both (I did). But ATA-100 was, and still is, enough for most people. XP is still the most widely used OS, and likely it resides on old hardware. Consumers aren’t buying new desktops like they used to; they’re buying laptops, which are inherently slower, and mobile devices, which are even slower. However, all those mobile devices have fueled the SSD market. As for slow growth … a 2TB drive for 100 bucks isn’t enough for you? Honestly, how many 2TB drives do you own, and how many are full? I have a cobbled-together JBOD NAS with less than 2TB, half the crap on it I’ll never watch again, all the music is lossless, and I still have a few hundred gigs free. If anything, storage has outpaced demand by a great deal – unless you’re YouTube or Facebook, but they’re already talking about putting an age limit on large files and deleting them.

The power is there, but consumers are not buying it; they’re buying things less powerful but more convenient. Then when they need the desktop, they go back to their 5-year-old computer running XP and do whatever they need to do, because it’s all they need.

All I’m saying is that chastising the big OEMs for not innovating at a breakneck pace (even though they are still innovating), while not looking at what consumers are actually buying, isn’t really telling the whole story.
