There was a great bit of stage show at the MIX 2011 event in Las Vegas last week as the launch of the Kinect SDK was demoed (see it here: http://channel9.msdn.com/Blogs/C9Team/Kinect-Demos-with-the-Channel-9-team). For those of you who don’t know, Kinect is a USB device that enables the Xbox games platform (or, when the SDK ships, PCs) to be controlled by people’s movement. It uses a series of clever cameras to convert people’s movements into system control. It’s also the fastest-selling electronic gadget of all time (http://www.wired.co.uk/news/archive/2011-03/10/kinect-fastest-selling-device).
We are now in a world of incredibly advanced computing devices, and yet our basic methods for interacting with them are still constrained by the QWERTY keyboard (invented in 1873), the mouse (1963) and even the “modern” touchscreen (again, the mid-1960s). The Kinect (and the success of the Wii before it) points to new models of interaction with games systems, but at our desks we still seem to be stuck with a set of interaction devices that are in their forties or older.
Speech recognition is one area that seems to have been bubbling under for many years but still hasn’t really taken off. There are a few factors at play here, not least that many of us feel daft talking to a machine, and that a modern open-plan office filled with people talking to their machines would be a hellish place indeed.
Interestingly, though, although I personally feel very silly talking to a machine when the machine itself is my audience, I have no such qualms with voicemail, presumably because my subconscious is happier when the intended recipient of my ramblings is another human being. Here, I think, is the key to where interface design will go, because, quite simply, IT is starting to slip into the background as it becomes “merely” a conduit for conversations between people. Witness the dramatic rise of social networks in recent years…