Dyson Work Light

Saw a cool-looking desk lamp at Best Buy the other day and made the mistake of letting the Dyson vendor start talking to me about it.

It’s a work light.

That costs $599.

Because (a) the arm conducts heat away from the LEDs (you know how LEDs are infamous for burning out because of heat, right?), so thanks to the arm these can last YEARS, right up until you have to replace the entire unit, because it’s a special magic LED that will cost you more to replace than a new work light! And (b) it has super-high-tech secret sensors that allow it to match the ambient light.

Or: It costs $599 because it has a special LED, designed to get really super hot, attached to a heat-conducting metal arm with a heat-pipe in it that will conduct that heat straight into your hand when you try and adjust the lamp – which you will do regularly because you can’t tell whether the light is on or not.

Frankly, “can’t tell whether the light is on or not” is the opposite of what I look for in a lighting solution. And when the guy demoed it for me, it really did match the ambient light so well that we both had to tilt our heads to look directly at the bulb to be able to tell it was on.

The guy took great pleasure in showing me how you could raise or lower the arm, how it rotated, and then nearly flipped out when I started to reach for it…
Not only can you not tell whether the light is on or not, you can’t point it where you want, and people keep breaking his floor models by trying to twist or pivot it. The mechanism on the stand looks exactly like the kind of mechanism that would allow you to tilt or twist the arm. The picture on the front of the box is even taken from a bizarre angle that makes it look like the lamp has had the head tilted upwards.

Actual:

But it was standing in front of images like these:

And what I refer to as the “light comes out here” picture

But I think this actual Dyson promo picture sums it up

“The light from my phone is reflecting off the lamp”
“I can’t quite read what’s on my phone, I’ll have to lean forward for a closer look”
“This posture looks totally natural”
“It’s a touch screen, that’s why I’m not looking at it”
“I’m calling 911 to report this thing, and not taking my eye off it”

Yep – if there’s one thing you want from a work lamp, other than light, it’s having it directly overhead of whatever you want to work on, right at eye level; that’s got to be right up there… I’d throw my money at the screen but it’s too dark in here to see my money or the screen…

I did a thing

About a month ago, I dug up an old copy of my MUD language from 1990 and took a crack at refactoring some of it in C++, but I quickly got frustrated with just how much that felt like work rather than fun, especially after my recent foray into golang.

So I have this 29+ year old code, written in original K&R C, and I want to get it, say, at least compiling.

Why not take a shot at doing it in pure C?

Analytica

Cambridge Analytica weren’t doing rocket science or hacking. They didn’t get your social security number, your phone number or your browser history.

They built a few ad-targeting criteria with such low precision and accuracy that any marketing person would be ashamed of the numbers.

That imprecision played to their ends. Remember, the goal was to be disruptive, so although they were using ad-targeting technology, having built-in misses was actually a boon.

What they did was:

1. Got people to fill out a survey,
2. Collected their Facebook profile data, with consent,
3. Collected uninteresting public profile data about their friends, like public group memberships.

Then they used some machine learning systems to:

a- Build an ad-targeting profile of the survey respondents using #1,
b- Build the same ad-targeting profile using #2, but “back-train” it by correlating #1 and (a),
c- Use (b) on that uninteresting public stuff in #3.

They weren’t trying to pin down anyone’s precise political alignment or belief system, they were looking for broad strokes: Watches Colbert, Watches Hannity; Loves Guns, Loathes Guns.
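To make the back-training step concrete, here is a rough sketch of what that sort of pipeline could look like; the data, feature names and model choice are invented stand-ins for illustration, not anything Cambridge Analytica actually published.

```python
# Hypothetical sketch of the survey -> profile -> friends pipeline described above.
# All data, feature names and model choices are invented stand-ins for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 1. Survey answers from consenting respondents (say, 10 questions each).
survey_answers = rng.integers(0, 5, size=(1000, 10))

# (a) Derive a coarse targeting bucket from the survey itself
#     (e.g. 1 = "watches Hannity", 0 = "watches Colbert").
bucket = (survey_answers.sum(axis=1) > 20).astype(int)

# 2. The same respondents' richer Facebook profile features (likes, groups, ...).
profile_features = rng.random(size=(1000, 50))

# (b) "Back-train": fit a model that predicts the survey-derived bucket from
#     profile features alone, so the survey is no longer needed.
profile_model = LogisticRegression(max_iter=1000)
profile_model.fit(profile_features, bucket)

# 3. Friends' much thinner public data, mapped into the same feature space.
friend_features = rng.random(size=(50000, 50))

# (c) Apply the back-trained model to people who never took the survey.
friend_buckets = profile_model.predict(friend_features)
print("bucketed", len(friend_buckets), "friends into broad-strokes segments")
```

The precision of the result barely matters: a bucket that is right more often than a coin flip is enough when the goal is broad-strokes disruption rather than marketing-grade targeting.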

City-building for VR

There are a couple of RTS type games for VR, but they’re just projections of the existing style of game, and doing this stuff by hand is clunky and annoying.

What I’d like to see is a VR-capable building game that lets you operate from the old top-down perspective for viewing the big picture, but lets you zoom on down to the in-situ level to see what the people are doing and to do things like plan/upgrade buildings, using gestures and the like to give it a cyberspace-god feel.

Figure: You’re standing on the gravel path watching your woodworkers heading out to chop down trees. They mumble about being hungry as they pass. Oh yeah, I could use a bakery. Circle your hand to open the blue-glowing buildings rolodex, tap your hand on the glowing bakery image, and then throw it down to roughly where you want it; use both hands to twist/position it, and then bang your fist down to make it so.

You could, of course, do all of this from the more conventional menu systems, but including the creators-eye view would just make it that much more kick ass :)


What changed about the ‘net?

So “What changed in the last 15 years of the Internet?”

It’s not actually the Internet we’re talking about regulating. The truth is, in the USA, we’re talking about regulating Comcast and Charter.

On paper, Comcast’s internet subscribers passed their TV subscribers in 2015.

On paper.

The weakest link

Contemporary voice recognition systems over-emphasize learning based on explicit “a=b” training; that is, there is a critical absence of negative training.

I imagine a parent and child: the parent says “It’s time for …” as a peal of thunder ripples through the room. This might be used as a comedic device precisely because we would not expect the child to respond “Yes, daddy, I’ve turned on the lights in the kitchen”. I’ve yet to hear a voice system ask me what “ACHOO” means or just say “what was that?”

After a hiatus, I return to Windows speech recognition and am confused by just how far ahead it is of the technology we rely on in Siri, Alexa, Google Home, even Microsoft’s own Cortana.

For training, it still relies on the old “speak these words” explicit recognition training. This is basically the same tech that shipped with Windows 7, and this comes back to my point: this approach already wasn’t state-of-the-art when Windows 7 shipped.

I believe a far better approach would be a decoupled training procedure: don’t tell the training system what the user is being asked to say. Instead, use a combination of pre-scripted phrases, common keywords, and insight into the state of the network to decide what to ask me.

Then, ask the user to exclude options until they are down to something close enough that only individual words need correcting.

There are two major gains here: 1. The user gets clear feedback on where the system is struggling to understand. 2. Instead of teaching the system that “*cough*pi” means pizza and that “zzaplease” means “please”, I can acknowledge the system’s ability to match sounds to speech.
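To sketch what I mean by decoupling, here is a toy exclusion loop; the phrases, the similarity score and the reject callback are all invented for illustration and bear no relation to how any shipping recognizer actually works.

```python
# Toy sketch of "decoupled" training: the trainer does NOT know which prompt the
# user was asked to read; it keeps a ranked list of candidate phrases and asks
# the user to exclude options until only word-level corrections remain.
# The phrases, scoring and reject callback are all invented for illustration.

def score(candidate: str, heard: str) -> float:
    """Crude word-overlap similarity, standing in for a real acoustic model."""
    c, h = set(candidate.split()), set(heard.split())
    return len(c & h) / max(len(c | h), 1)

def narrow_down(heard: str, candidates, reject) -> str:
    """Offer the best-scoring candidate; drop it whenever the user rejects it."""
    remaining = sorted(candidates, key=lambda c: score(c, heard), reverse=True)
    while len(remaining) > 1 and reject(remaining[0]):
        remaining.pop(0)  # a rejection is an explicit negative training signal
    return remaining[0]

prompts = [
    "turn on the kitchen lights",
    "turn off the kitchen lights",
    "order a pizza please",
]
heard = "turn kitchen lights"  # what the acoustic front-end thinks it heard
# Simulate a user excluding the system's first (wrong) guess.
final = narrow_down(heard, prompts, reject=lambda c: c == "turn on the kitchen lights")
print("settled on:", final)  # -> turn off the kitchen lights
```

The rejections are the interesting part: every “no, not that one” is a negative example the engine never gets from today’s “speak these words” training.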

The problem of purely positive training is compounded by the assumption engineers make that the systems will only hear deliberate communication.

Think about this: You cough and your voice system activates. You say “I wasn’t talking to you”, and you get a witty reply.

Except: You actually just trained the system that it heard its activation word; it may have changed in recent months, but it was certainly true at the start of the year that all the big systems had this flaw.

Nor does being quiet help.

I think this is part of why all of the current systems have the ability to suddenly become dumber on you. Perhaps the microphone is suddenly muffled, or perhaps the subtle changes from you having a cold for a day totally reaffirmed some weak association in the engine, and it’ll take you months to untrain it again so it recognizes your regular voice.

It’s my hunch this is why there is so often a clear honeymoon period with devices like Alexa, Google Home etc: you become less forgiving, the system becomes over-confident, the first thing you say gets misunderstood, and your speech pattern changes as you become annoyed, angry or bothered by the device. So instead of your normal voice being the voice it expects, your angry or shouty voice is the one it trains itself on the majority of the time.

Alexa does let you give corrective feedback via the Alexa app, but that quickly becomes burdensome and, after the first few months, largely seems to be ineffective.

Positive AND negative training are the way forward.

Cold hard cache.

Time to crawl the interwebs. I’m looking for something relatively small and lightweight, a binary blob cache that I can drop into place, import a module in python and have relatively easy access to.

The keys are likely to be large, and the blobs may be several MB. I don’t care a great deal about persistence.

What I’m looking to achieve is something like ‘distcc’ for asset conversion. The backend doesn’t need to know that; it’s just going to get semi-opaque key values that ultimately serve to compartmentalize hash spaces.
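If nothing off-the-shelf fits, the interface I have in mind looks roughly like this; a minimal sketch that assumes a Redis backend purely for illustration, with the large keys hashed down to a fixed size.

```python
# Minimal sketch of the blob-cache interface I have in mind.
# Redis is purely a stand-in backend here; the interesting part is hashing the
# large, semi-opaque keys down to something fixed-size.
import hashlib
from typing import Optional

import redis  # pip install redis


class BlobCache:
    def __init__(self, namespace: str = "assets", host: str = "localhost"):
        self._db = redis.Redis(host=host)
        self._ns = namespace

    def _key(self, raw_key: str) -> str:
        # Keys may be huge (paths, option soup, tool versions), so store blobs
        # under a fixed-size digest of the key instead of the key itself.
        digest = hashlib.sha256(raw_key.encode("utf-8")).hexdigest()
        return f"{self._ns}:{digest}"

    def put(self, raw_key: str, blob: bytes, ttl_seconds: int = 86400) -> None:
        # Persistence isn't important, so an expiry is fine.
        self._db.set(self._key(raw_key), blob, ex=ttl_seconds)

    def get(self, raw_key: str) -> Optional[bytes]:
        return self._db.get(self._key(raw_key))


# Usage: cache the result of an expensive asset conversion.
# cache = BlobCache()
# cache.put("texture.png|fmt=dxt5|mips=on|toolchain=1.2.3", converted_bytes)
# blob = cache.get("texture.png|fmt=dxt5|mips=on|toolchain=1.2.3")
```

The namespace prefix is what does the “compartmentalize hash spaces” part: different asset pipelines can share the same backend without their keys colliding.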

Mashinky

I played a lot of Railroad and Transport Tycoon. Some of the recent attempts to recreate the experience fall flat because they either look hideous or they are too busy delivering an ultra-realistic world/train-driving simulation.
 
On Steam, “Mashinky” popped onto my queue and it looks like Jan Zeleny may have found a great middle ground.
There is a tycoon-like, low-rez tiled experience which he maps into a more beautifully rendered 3D mode with fancy camera options.
Steam reviews are mixed, but this is now at the top of my queue to try, along with Secret World.

CPPCon 2017

I love and hate conventions, so I don’t go to them all that often.

Although I’ve watched CPPCon videos, I hadn’t considered it something you attended until this year; I wasn’t really convinced it would be worth going.

The agenda for the first few days proposed some very interesting stuff, and I decided to dip my toe in.

Beware, AI…

Ever done one of those puzzles where you have to change the word “FISH” into “SOAP” one letter at a time? Imagine a more scrabble-like two-player version where each player starts from one word and they work towards the middle together.

The recent stink about Facebook shutting down some chatbots is the clickbait version of a story about Facebook guys creating code that tried to do roughly the same thing, except the dorks got carried away using words like “the machines” and “invent” and “language”.

I suspect that Facebook shut down the project because it was pointless and stupid and the coders were a little bit too whimsical.

What they did was take the task of “bartering” and reduce it to a simple numbers game; think of a sort of co-operative scrabble/fish (cards) version of the earlier puzzle where you don’t have to trade a card if it isn’t a fair trade, and the game ends the first time neither of you offers a fair trade.

You do this by drawing two hands. Each hand can be described numerically as a list of (card number and quantity). That is: jack, jack, ace, three = (card 11 * 2), (card 1 * 1), (card 3 * 1). Take the word ‘card’ out and we have (in json/python): [(11, 2), (1, 1), (3, 1)].

The Facebook guys wrote small programs that took two such lists and built a new list: the cards they want to trade. jack for queen, jack for king would be [(11, 12), (11, 13)] (jack is 11, queen 12, king 13).

These lists were sent between the programs using Messenger. To do this, the programmers – not the programs – replaced the numbers with words to generate a text message they could send. At the other end, the same code mapped the words back into numbers.
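To underline how mundane that plumbing is, here is a toy version of the encode/decode step; the card-name mapping and the example trade are illustrative, not Facebook’s actual code.

```python
# Toy version of the numbers <-> words plumbing described above.
# The card-name mapping and the example trade are illustrative, not Facebook's code.
CARD_NAMES = {1: "ace", 3: "three", 11: "jack", 12: "queen", 13: "king"}
CARD_NUMBERS = {name: num for num, name in CARD_NAMES.items()}

def trades_to_message(trades):
    """[(11, 12), (11, 13)] -> 'jack queen jack king'"""
    return " ".join(CARD_NAMES[card] for pair in trades for card in pair)

def message_to_trades(message):
    """'jack queen jack king' -> [(11, 12), (11, 13)]"""
    numbers = [CARD_NUMBERS[word] for word in message.split()]
    return list(zip(numbers[0::2], numbers[1::2]))

offer = [(11, 12), (11, 13)]      # jack for queen, jack for king
text = trades_to_message(offer)   # this is what actually went over Messenger
assert message_to_trades(text) == offer
print(text)                       # -> jack queen jack king
```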

So far, this is all very computationally simple, and I’m sure that there was some level of “ai research” or “machine learning” code involved, but the approach taken and the underlying task they focused on resulted in nothing special. The programs didn’t “know” anything, they just needed to succeed in choosing a number sequence that went from their first hand to their last hand without choosing numbers that were “too big” (I’m simplifying the concept of filtering here).

The programs did not become self-aware, did not know they were “communicating”, only “communicated” in so much as the line “sendMessage(‘jack queen jack king’)” as code is “communicating” (it’s a techie term, not the literal English ‘communicate’), and they most certainly did not invent a language; they simply did literally what they’d been told to do and nothing else.

Honestly: What happened is that some idiots got their project cancelled and bitched about it by describing it like an 8-year-old…

“We wanted the other machine to trade our machine a jack for a queen, but instead of developing the ability to speak english and saying ‘Trade you a jack for a queen’ via a speaker box, it was really spooky… our machine said ‘jack queen’, and the other machine – the one with the red eyes and the laser beams – it said ‘queen jack’. Holy shit! Sure, we wrote code to print “something something” but … it was doing it. All on its own, when we clicked Run.

“Obviously it didn’t say that, it just printed 10 11 and 11 12, but when we ran the program that converted the numbers into text and sent them to messenger, you could see it right there, on facebook! In text! ‘jack queen’ and ‘queen jack’. The machines were talking to each other! It was, like, they had invented their own language.

“First time round, we couldn’t get the other computer to receive the messages, we had to copy and paste them into a program to convert text into numbers on the other machine, but when we did that, when we converted the text into numbers, and ran our program, it printed out some more numbers. It was like the machine understood what was being said to it. Totally freaky.”

TL;DR: There was definitely some “artificial” intelligence behind the story.

Mr #4, if you read this – someone needs to be “transferred to the Feed-PE team”.