In conversation yesterday I realised that I’ve developed an occasional habit of defining alternatives to the Turing Test as ways of understanding quite how far away Artificial Intelligence really is. Here’s the compilation…
An observation that humans are getting quite good at parsing the complete gibberish that results from “AI” autocorrect on phones substituting the wrong words. The Internet of Malapropisms.
Similarly, spell check and autocorrect fall apart whenyoustartleavingoutthespaces.
AI that could do sarcasm, well. Yeah, right…
Another humour thing – apparently puns are found in most languages.
As I frequently remind people, Adam Grant’s book Originals describes procrastination as a habit of original thinkers. I’m looking forward to procrastinating AIs.
Yesterday’s tweet that made me realise this habit…
Emotions. Any sign of emotions yet? Not just emulating them. Actually feeling them…
And finally…
Hi Matt,
Are you aware of the Winograd Schemas? Your alternatives all aim to evaluate the emotional capabilities of AI, but we are still struggling to get reasoning right. Hence this “new” test. Here’s a link to the Wikipedia article about it, and to a recent report on the Winograd Schema Challenge, which takes place once every two years. The latter will show you just how early days it is:
https://en.wikipedia.org/wiki/Winograd_Schema_Challenge
and
http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/winograd-schema-challenge-results-ai-common-sense-still-a-problem-for-now
Have a great weekend!
Wim
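For readers unfamiliar with Winograd Schemas, a minimal sketch may help. The example sentence pair below is the classic one attributed to Terry Winograd (it is not taken from the links above); the dictionary representation and the naive resolver are purely illustrative:

```python
# The classic Winograd schema: two sentences differing by a single word,
# where the correct referent of the pronoun "they" flips. Resolving it
# requires commonsense knowledge, not surface statistics.
schemas = [
    {
        "sentence": ("The city councilmen refused the demonstrators a "
                     "permit because they feared violence."),
        "pronoun": "they",
        "candidates": ["the city councilmen", "the demonstrators"],
        "answer": "the city councilmen",
    },
    {
        "sentence": ("The city councilmen refused the demonstrators a "
                     "permit because they advocated violence."),
        "pronoun": "they",
        "candidates": ["the city councilmen", "the demonstrators"],
        "answer": "the demonstrators",
    },
]

def naive_resolver(schema):
    """A shallow baseline: always pick the first-mentioned candidate."""
    return schema["candidates"][0]

correct = sum(naive_resolver(s) == s["answer"] for s in schemas)
print(f"naive baseline: {correct}/{len(schemas)} correct")
```

The schemas are deliberately constructed so that any such shallow heuristic scores at chance (here, 1 out of 2), which is why the challenge is a better probe of reasoning than the original Turing Test.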