The shutdown of airspace across the UK at the end of last week raised an issue that’s been bouncing around in my head for a while: that we have long since reached a point where the systems we have developed, and the interconnections between them, are too complex for us to understand.
From what I have heard and read, it seems that the shutdown was a failsafe mechanism that triggered as a result of data from two separate components becoming out of sync. The result of that error was the controlled landing and redirecting of flights in the south of England.
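To make the idea concrete, here is a minimal sketch of that kind of failsafe: two redundant components process the same data, and any disagreement halts automatic processing rather than guessing which side is right. All the names here are illustrative assumptions, not anything from the actual air traffic control system.

```python
class FailsafeTriggered(Exception):
    """Raised when redundant components disagree."""


def process_flight_plan(primary_result: str, secondary_result: str) -> str:
    # The safe behaviour on a mismatch is to stop and hand over to
    # humans, not to automatically pick a winner.
    if primary_result != secondary_result:
        raise FailsafeTriggered(
            f"components out of sync: {primary_result!r} != {secondary_result!r}"
        )
    return primary_result


# Matching data flows through normally.
route = process_flight_plan("EGLL-EGCC", "EGLL-EGCC")

# A mismatch stops automatic processing entirely.
try:
    process_flight_plan("EGLL-EGCC", "EGLL-EGPH")
except FailsafeTriggered:
    pass  # fall back to manual handling by controllers
```

The design choice worth noticing is that the failure mode is deliberate: the system refuses to proceed on inconsistent data, which is exactly what "failing gracefully" looks like from the inside, even if from the outside it looks like everything grinding to a halt.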
The backlash is a symphony of “must never happen again”-type simplicity. But with systems as complex as air traffic control, change is not just difficult, it’s positively resisted. It’s not a world where you want to start getting all Minimum Viable Product agile, quite frankly. The implications of failing fast are too horrible for words.
But in a world of complexity, the reality is that you can’t necessarily say with absolute certainty how things will work. The failsafe mechanisms that obviously worked well last week are the answer – “designed to fail gracefully” if you will. But our public debate about computing-related issues still lives in a world where computers should be absolute, predictable things that don’t go wrong (and when they do, there must be somebody to blame).
I figure we’re all going to have to get our heads around chaotic systems and the implications they have for us. Most importantly, that when they go wrong, we should be looking for grace in those failures.