A new week, a new book. This week it’s Steven Johnson’s Wonderland: How Play Made the Modern World. Johnson’s thesis is that much technological innovation stems from the pursuit of happiness and distraction rather than from hard-headed economic need.
In one of the early chapters he charts how the work of automata manufacturer Jacques de Vaucanson influenced a young Charles Babbage, inspiring the mechanical computers of his later life.
The 1700s saw the creation of amazing mechanical and clockwork devices that recreated everything from the actions of humans to the activities of members of the animal kingdom. The creation that inspired the young Babbage was Vaucanson’s Digesting Duck.
This huge contraption was able both to eat grain and to poop it out of the other end. As you can see from the photo, the mechanisation of such a biological process took a fair heft of hardware. Presumably the mechanisms were kept hidden so as not to spoil the illusion.
Whilst Vaucanson’s creation gave the illusion of digestion, producing the expected outputs from the appropriate inputs, it was nothing but a simulation. It wasn’t actual digestion.
Reading this has reminded me of two experiences last week – the first a rather ropy demonstration of an “Artificial Intelligence” system, the second a fairly exhaustive take-down of the current wave of “cognitive” computing from Roger Schank, someone who has spent his career in the field of computer intelligence.
Schank’s argument is that whilst the media rhapsodise about the current advances in machine intelligence, they are being hoodwinked. We aren’t in a golden age of machine intelligence – we are at a point where if you throw enough raw computing power and data at a problem, you can (as long as the problem is reasonably well-bounded and reasonably logical) create a reasonable simulation of intelligence. We aren’t creating artificial intelligence or cognitive computing – we are building Thinking Ducks.
The complexity around all of this from a psychological perspective is really fascinating:
- we seem hardwired to want to find machines or people who can predict the future, thereby removing the stress and strain of ambiguity from our lives;
- we are complete suckers for anthropomorphism – if something looks a bit human, we project humanity upon it;
- there’s some really interesting research suggesting that humans (and men in particular) are better able to open up emotionally to a machine that looks like a human (but is known to be a machine).
That last point is the one that is really lost on many of the data science types who are creating these Thinking Ducks, but it might be where the most value lies going forward. We might see some really interesting things emerge not because the machines are intelligent, but because they act just enough like us to help us use technology more effectively. To help us be more effective.
That wouldn’t be an exercise in precision or accuracy – anything but. It would be an exercise in creating systems that can make mistakes, that aren’t all-knowing, and that show humility. Not so much that they become totally human, but enough that we can relate to them. Just imagine – the rise of the self-deprecating machine…