How often have you been playing a game and come across something “weird” that clearly should never happen: like finding a spot where the AI can’t “see” you while you wail on them mercilessly, or in the process of button mashing you catch a trigger that kills the boss who had you down to 1 hp without losing any of his own…
You scratch your head and ask: Why didn’t they think of that?
Chances are … someone did :) That someone just had reason to figure it would never happen.
Someone once told me that programming is like writing out a series of steps for a robot: Go to kitchen, pick up coffee cup, go to coffee machine, pick up coffee pot, pour coffee into cup, take coffee to me.
Sadly, programming isn’t chirpy fun like that. In fact, you have to be a bit of a pessimist.
Let’s start with “go to kitchen”. What if the robot is already in the kitchen? Should it leave? OK. So you meant, “if not already in kitchen, then go to kitchen”.
“pick up coffee cup”. Ok – but what if the coffee cup isn’t in the kitchen? What if there is more than one coffee cup?
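A pessimist’s version of those first two steps might look something like this sketch. Everything here is invented for illustration — the `Robot` class, `fetch_cup`, and the “take the first cup” policy are all assumptions, not anyone’s real API:

```python
class Robot:
    """A minimal stand-in robot for the coffee-fetching example."""
    def __init__(self):
        self.location = "lounge"
        self.holding = []

    def go_to(self, place):
        self.location = place

    def pick_up(self, item):
        self.holding.append(item)


def fetch_cup(robot, cups_in_kitchen):
    # "go to kitchen" -- but only if we aren't already there
    if robot.location != "kitchen":
        robot.go_to("kitchen")

    # "pick up coffee cup" -- guard against there being no cup at all;
    # if there are several, we need an arbitrary policy (take the first).
    if not cups_in_kitchen:
        raise LookupError("no coffee cup in the kitchen")
    robot.pick_up(cups_in_kitchen[0])
```

Notice that even this toy version forces two decisions the one-line instruction never mentioned: what to do when the precondition already holds, and what to do when the thing we want doesn’t exist.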
But let’s get down to the juiciest of these steps: “pour coffee into cup”.
What if the coffee is stale? What if there is no coffee in the pot? What if the cup was in the sink when the robot picked it up and is full of dishwater? What if it hasn’t been in the sink but is full of mould?
If we add steps to clean the cup, how far do we go? What if someone had been using the cup to hold highly toxic chemicals, and cleaning it out just makes things worse? The last thing you want is for the robot to bring you a cup of innocent-looking but bowel-melting killer coffee.
Our robot needs us to spell out every decision that needs to be evaluated. It’s not going to stop automatically just because the coffee is way colder than we were anticipating.
And that means we have to make decisions about what constraints we want to test.
For instance, what about the temperature of the coffee in the pot?
We probably want to check if it is too hot or too cold. So we choose some temperatures.
Is coffee cold? Warm it up.
Is coffee boiling? Cool it down.
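Those two checks might be sketched like this. The thresholds are pure assumptions — 60 and 90 degrees Celsius are numbers I picked out of the air, which is exactly the point: *someone* has to choose them:

```python
# Hypothetical thresholds, chosen arbitrarily for illustration.
TOO_COLD_C = 60
TOO_HOT_C = 90

def adjust_temperature(temp_c):
    """Basic sanity checks: is the coffee cold? Boiling?"""
    if temp_c < TOO_COLD_C:
        return "warm it up"
    if temp_c > TOO_HOT_C:
        return "cool it down"
    return "pour"
```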
But what if some jerk has come along and put superheated plasma in our coffee cup? If the robot tries to cool it down by putting it in the fridge… Uhm. Bad things may happen.
And what happens if some idiot was cooling quantum bits in the coffee cup so it is just a few degrees above absolute zero. Picking it up might damage the robot, but assuming it survives that, what if our reheat method involves putting it in the microwave for a few seconds… KABLOOM.
You have to make decisions on what’s likely. So you have to actually think about these things. Sure, not checking for plasma in a coffee cup is likely to be pure omission: I bet a few of the programmers amongst you had thought, “and what about whether the coffee is hot or cold?” but I doubt many of you had thought “and what if the coffee could melt the sun?”
Let’s say we’ve now decided to go out on a limb on the temperature check and add rules that could cope with anything a robot might meet on the set of Eureka.
Our “pour coffee” code now has sanity checks. But what happens when one of those checks fails?
If we’re going to add “super-heated” and “super-cooled” constraints to our “pour coffee” rules, we need to handle what happens when the coffee isn’t poured because of them.
Remember, the robot picked up the coffee pot. If our pouring rules are going to check for an insanely hot temperature, we probably need to protect the robot.
“pick up coffee pot. pour coffee into cup. If we couldn’t pour the coffee into the cup because the coffee was at 10,000,000 degrees, replace pot and get the hell out of there”.
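That step, with its escape hatch, might look like the sketch below. The extreme bounds are assumptions: the 10,000,000-degree figure comes straight from the text, and −270 is my invented stand-in for “a few degrees above absolute zero”:

```python
# Extremes beyond which fixing the coffee is no longer our problem.
PLASMA_C = 10_000_000   # figure from the text
QUANTUM_C = -270        # invented stand-in for "near absolute zero"

class CoffeeHazard(Exception):
    """Raised when the coffee is dangerous rather than merely wrong."""

def pour_coffee(temp_c, actions):
    actions.append("pick up coffee pot")
    try:
        if temp_c >= PLASMA_C or temp_c <= QUANTUM_C:
            raise CoffeeHazard(f"coffee at {temp_c} degrees")
        actions.append("pour coffee into cup")
    except CoffeeHazard:
        # Protect the robot: put the pot down and leave.
        actions.append("replace pot")
        actions.append("get the hell out of there")
```

The point isn’t the exception mechanics; it’s that a failed sanity check needs its own plan, because by then the robot is already standing there holding the pot.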
In a really good piece of programming, you will build your steps modularly. Instead of one giant description of how to pour the coffee, you’ll break it down into smaller and smaller steps.
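The modular style might look like this sketch — every function and field name here is invented for illustration, but the shape is the point: one small decision per step, so each can be reasoned about (and changed) on its own:

```python
def cup_is_clean(cup):
    # A cup is "clean" if it holds nothing. (Dishwater, mould, and
    # toxic chemicals all fail this check.)
    return cup.get("contents", "empty") == "empty"

def temperature_is_sane(temp_c):
    # Ordinary liquid coffee only; plasma and quantum bits need not apply.
    return 0 < temp_c < 100

def pour(pot, cup):
    cup["contents"] = "coffee"

def pour_coffee(pot, cup):
    """The 'giant description', rebuilt as a chain of small checks."""
    if not cup_is_clean(cup):
        return "wash cup first"
    if not temperature_is_sane(pot["temp_c"]):
        return "refuse to pour"
    pour(pot, cup)
    return "poured"
```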
When you are under a deadline and having to cobble heaps of extra complexity into a system, it’s incredibly easy to just slip that extra line into a routine here or there, instead of making the next layer down a whole order of magnitude more complex and affecting everything that relies on it.
At some point, you just have to decide “that can never happen”. And so, you happily ship a robot that can deal with plasma in a cup, and won’t put a cup of liquid-nitrogen-cooled Brownian motion into a microwave … but it will, unfortunately, gladly serve you steaming hot coffee in a gamma-ray-emitting Chernobyl souvenir.