Computers are very good at following rules. It’s kind of what they do.

If you look at the recent landmark advances in machine intelligence, they generally fall into one of two camps: using artificial intelligence to excel in a rule-bounded domain (playing Chess, Go, Jeopardy, Poker, the markets and so on), or using masses of data to construct rules in a way that humans would find very difficult or next to impossible.

If you believe that everything can be determined by a set of rules, then this is great. Bigger, faster, stronger and so on.

But what if that’s not the case? Under the scientific method, the rules that define and predict the universe are good only for as long as they are not disproved, and then they are changed, often in dramatic fashion: from Newtonian physics to relativity; from the Ptolemaic model of the universe to that of Galileo.

These step changes in rules show us two things: first, that models describing the universe, even very useful ones, might not be right; and second, that significant advances in science don’t come only from gradual adaptation. Sometimes science is very iconoclastic.

Can we create iconoclasm in software? Who knows. But I fear that we are entering a period where “The algorithm told me to do it” will become more and more common, and that those algorithms, whilst very useful, will run the risk of catastrophically breaking, either in isolation (take, for example, the Google Flu Trends model) or in interaction with other systems (the 2010 Flash Crash). As the technologies become ever more complex, safeguards against such risks become ever harder to build, because fewer and fewer people can understand the implications of so many interacting rules-based systems. With Machine Learning, how do mere humans challenge the superiority of the system?

Can everything be described in terms of rules? Well, maybe. But the complexity of doing so might make it practically impossible. Try writing the rules for a common task like riding a bicycle. Writing them to the extent that another human could immediately do it from them alone, let alone a machine.
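To make the point concrete, here’s a minimal sketch, in Python, of what explicit rules for just the balancing part of cycling might look like. Everything in it is assumed for illustration: the sensor readings, the thresholds and the actions are all invented. Even so, it shows how quickly hand-written rules run out of road.

```python
# A naive, rule-based "keep the bicycle upright" controller.
# Hypothetical throughout: the sensor names, thresholds and
# actions are all invented for this sketch.

def balance_step(lean_angle_deg: float, speed_kmh: float) -> str:
    """Return a steering action from explicit, hand-written rules."""
    if speed_kmh < 5:
        # Below walking pace these rules fail entirely: bicycles are
        # stabilised by motion, not by small steering corrections.
        return "pedal harder"
    if lean_angle_deg > 3:
        return "steer slightly right"  # steer into the lean
    if lean_angle_deg < -3:
        return "steer slightly left"
    return "hold steady"

if __name__ == "__main__":
    for angle, speed in [(0.0, 20.0), (5.0, 20.0), (-8.0, 3.0)]:
        print(f"lean {angle} deg at {speed} km/h -> {balance_step(angle, speed)}")
```

Even this toy exposes the problem: how slight is “slightly”? What about cornering, where leaning is the point? Braking, gravel, wind? Each exception spawns more rules, and the rule set never converges on what a child learns by feel in an afternoon.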

Can complex human systems be described in terms of rules? Well, as the industrial action known as “Work to Rule” shows, much of what actually allows organisations to function happens outside of the rules. Around the edges. Informally. At a human level.

As new waves of technologies promise more “prediction” of the future, more knowing of the unknowable, more magic and pixie dust, we run the risk either of continuing the long history of over-investment in ineffective technology (I don’t see any AI or machine learning involved in the investment decisions being made in new technologies), or of creating massively more complicated and complex socio-technical systems that become ever more fragile.

I see that my LEF colleague Dave Aron is becoming increasingly interested in Taleb’s concept of Antifragility as part of his research agenda. This is a good thing. I’d also propose from the outset that an organisation’s greatest resource when it comes to antifragility is its people, not its systems. We are quite good at working out what shit needs to get done when the shit that was supposed to do it goes to shit.

(That’s my 2017 quota of the “s” word used up then…)
