We’ve got a relatively new, Samsung LED TV in our living room. It is covered with toddler fingerprints. We can’t find anything that seems to remove the smudges and the marks, and it somewhat distracts from the beautiful, bright, HD image.
One of two things, though, is causing this greasy destruction: either our kids have seen us interacting so much with our touchscreen phones that they assume all screens are touch-enabled; or our kids innately expect that a screen displaying moving images should be something they can interact with. Either way, it has fairly profound implications for how my children will expect to interact with technology as they get older, to say nothing of our home cleaning regime.
The dramatically fast move to what we at Microsoft refer to as Natural User Interfaces (touch, gesture (think Kinect) and speech) seems to have happened pretty much overnight. Touchscreens have certainly been around for decades, as has speech recognition, but it’s only in the last five years that a combination of improved technology (particularly capacitive multi-touch screens, low-power processors and improved batteries) and new user interface principles has brought into the mainstream devices that aren’t controlled by some form of abstract device (a numeric or QWERTY keyboard, mouse or pointing device). What my 15-month-old can navigate and achieve with applications on a smartphone is way beyond what he can manage with the unnatural interface of the mouse.
The idea that when my two enter the employment market in maybe 20 years’ time they will be forced to use such unnatural interfaces as those we still predominantly use today seems faintly ludicrous. But then again, the QWERTY keyboard is still around, 140 years after it was invented to solve a series of mechanical challenges on typewriters, and as I have written here a few times before, the technologies of the past have a remarkable ability to constrain the technologies of the future…