The recent hubbub about Google’s Go-playing artificial intelligence beating one of the world’s finest human players got me thinking.
Firstly, one of the interesting reflections on the tournament was that the machine played moves that a human just wouldn’t have thought appropriate (or even thought of). What this seems to reveal is that humans use patterns to achieve things in a way that a calculating machine doesn’t. Or maybe just in a way that an outsider doesn’t. AlphaGo’s performance reminded me in a way of world-renowned competitive eater Takeru Kobayashi, a man who destroyed the then world record for eating hot dogs through the insight that you didn’t need to eat the sausage in the bun (Freakonomics Radio did a whole episode on Kobayashi which is worth a listen).
Is it actually the absence of some kinds of intelligence that makes this AI so powerful in the very constrained realm of the game of Go?
Secondly, AlphaGo is yet again an (undoubtedly very impressive) example of an AI being applied to a constrained system. The rules of Go are absolute. The permutations of moves may be enormous, but they are bounded. This seems to be the realm in which AI works best: a world of complexity, but possibly not of chaos?
Could an AI invent the game of Go?
Could an AI make Go the popular game that it is today?
Could an AI get enjoyment out of playing Go?
The first of these questions is about creative intelligence. I hear quite a lot of talk about creativity being the last bastion of human superiority. Given the quality of so much that goes under the banner of “creative” these days, in our risk-averse, “hack it, don’t invent it” times, I’m not so sure.
The second is about communal intelligence. The phenomenon of Go is a cultural thing that has emerged from the actions, intelligence and beliefs of many, many people. That’s where I think we humans will always maintain dominance.
The final one is about emotional intelligence. I don’t think machines will ever have emotions, but they might get pretty good at pretending they do (especially as we are so easily taken in by anything that looks even vaguely human).
All of which leads me to wonder: is what we are looking at in Artificial Intelligence actually better described as Simulated Intelligence? Crunching vast pools of data, whether to recognise handwriting or speech, to translate between languages, or to play games, seems to be the modus operandi of AI these days. In the past, in times poorer in processing power and data, programmatically creating “intelligence” was the order of the day. The outcome might be the same, but it’s simulation, not artificial creation…