A fascinating evening last night: at the invitation of Mark Smith at LexisNexis, I was given the opportunity to speak with a group of law firm knowledge managers and, as is my style these days, to get them to play with Lego for a bit.
As the conversation evolved over dinner, one of the participants mentioned that he didn’t really understand the nature of machine learning. It led to an interesting conversation…
“Learning” is a term that has a multiplicity of meanings these days. In an organisational context, the “Learning & Development” function is a provider of “learning” which often actually takes the form of mere information (you know – the compliance stuff that employees are made to sit through to ensure their employer can tick some boxes about health and safety or money laundering or laundering money safely or what-have-you).
Learning in “machine learning” is more about algorithms that adapt based on the data they process – as this rather nifty TechTarget definition puts it:
Machine learning is a type of artificial intelligence (AI) that provides computers with the ability to learn without being explicitly programmed. Machine learning focuses on the development of computer programs that can change when exposed to new data.
Now that’s clearly very useful, and has given us (and continues to give us) lots of wonderful things. But in comparison to the human act of learning it’s all a bit, well, mechanical.
What is this mythic human learning of which I speak? Well, the model that I’ve found most useful over the years is the one that comes from David Kolb’s learning styles approach. There’s some controversy about learning styles at the moment, but the cycle of learning that comes from it seems to me to be a useful framework. We do something (act), reflect upon what we have done, theorise about how we could do it differently or better, and then plan how we might do so, leading again to action, and so it cycles.
Those middle stages require cognition. They require us to draw on our sum of experiences, not just the specific task at hand. They are in many ways what makes us human. Too often in working life the pressures to deliver or to perform mean that we don’t get the chance to do any of those things.
A machine learning algorithm, meanwhile, is doing none of those cognitive acts. It’s all action, with variation in what is done arising from a closed-loop statistical analysis. It’s not “learning” any more than it is “thinking”. It’s just re-calibrating, adjusting. That’s powerful stuff. But learning? No more than a memory foam mattress has memory.
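To make that contrast concrete, here is a minimal sketch (the function names and data are invented for illustration) of what that closed-loop re-calibration looks like in code: a one-parameter model nudging its weight after each observation. There is no reflection or theorising anywhere in the loop, just adjustment.

```python
def recalibrate(weight, x, y, rate=0.1):
    """Nudge the weight to reduce the error on one observation.

    This is the whole of the "learning": a statistical correction,
    with no understanding of what x or y mean.
    """
    prediction = weight * x
    error = prediction - y
    return weight - rate * error * x  # gradient step: pure adjustment

# Invented data following y = 2x; the model converges on the slope.
observations = [(1, 2), (2, 4), (3, 6), (4, 8)] * 25

weight = 0.0
for x, y in observations:
    weight = recalibrate(weight, x, y)

print(round(weight, 2))  # converges towards 2.0
```

Expose the same loop to different data and the weight settles somewhere else. The program “changes when exposed to new data”, exactly as the TechTarget definition says, but at no point does it do anything resembling Kolb’s reflect, theorise, or plan stages.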
The anthropomorphic metaphors used to describe many of the technologies at the peak of the hype curve at the moment are very powerful, and also very misleading. For many decades artificial intelligence was a quest for cognition in machines. Today the challenges of AI have been “solved” not through cognition or understanding, but through big data and statistics.
These tools are amazingly powerful. But without understanding, without making the leap from correlation to causality, without actual learning, we’ll continue to have the upper hand unless we decide to become subservient to the algorithms. That decision, though, might not be in all of our hands.