I’m currently reading Nassim Nicholas Taleb’s Antifragile. It’s not the easiest of reads (a prequel written after a sequel starts on a bad footing – just ask George Lucas), but it’s built on a fascinating concept: that the opposite of “fragile” isn’t “robust” or some such (those are just neutral concepts) but rather “antifragile” – things that get stronger from being kicked around a bit.
What it’s making me realise is that much of what is happening with systems, technology and the world of digital is making the world incredibly fragile, despite being billed as making things less risky.
Take, for example, recent large-scale systems failures at RBS (see a fascinating take on that here: http://coppolacomment.blogspot.co.uk/2013/03/the-legacy-systems-problem.html), at the UK air traffic control service (http://www.theguardian.com/uk-news/2013/dec/07/flights-grounded-swanick-computer-glitch), and, possibly, the impending disaster of electricity and gas “smart” metering (this is essential reading: http://www.nickhunn.com/smart-metering-is-fcuked/).
Within all of these examples there is a technology failure narrative (“computer said no!”), but underpinning them is a combination of human decision-making, politics and, ultimately, an almost blind belief that technology will prevail.
Our willingness to place so much trust in such complex systems is a mystery to me. It’s not that there is anything wrong with the technology – it’s just that when it meets people, the outcomes are never quite what the unambiguous world of engineering might predict. It’s not risk-free, it does go wrong, and so much of this today is binary in nature that when it goes wrong it takes everything down with it (for a period).
This, it seems, is a feature of digital systems. Think of the difference between analogue and digital TV (as a metaphor): lose your analogue TV signal and the picture gradually gets fuzzier and fuzzier. Lose your digital TV signal and you lose your high-definition, surround-sound experience in a loud and unpleasant instant.
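To make that cliff edge concrete, here’s a toy sketch in Python – the linear analogue curve and the 0.3 decoding threshold are invented for illustration, not taken from any real broadcast standard:

```python
# Toy illustration: analogue reception degrades in proportion to signal
# strength, while digital reception is all-or-nothing around a
# (hypothetical) decoding threshold.

def analogue_quality(signal: float) -> float:
    # The picture just gets fuzzier as the signal weakens.
    return max(0.0, min(1.0, signal))

def digital_quality(signal: float, threshold: float = 0.3) -> float:
    # Perfect picture above the threshold; nothing at all below it.
    return 1.0 if signal >= threshold else 0.0

for s in (1.0, 0.7, 0.4, 0.31, 0.29, 0.1):
    print(f"signal={s:.2f}  analogue={analogue_quality(s):.2f}  "
          f"digital={digital_quality(s):.2f}")
```

Run it and you can see the difference in failure modes: the analogue column drifts gently downwards, while the digital column sits at 1.00 right up until it drops to 0.00 in a single step.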
The pre-digital world seemed somehow more fault-tolerant, more able to adapt to small failures. The digital world by comparison has incredible advantages, yet it is brittle. Partly this is a technological design challenge (avoiding single points of failure, reducing technical complexity and so on), but it strikes me as a human design challenge too. We shouldn’t get suckered into pushing into “process” things that should remain under human control, and we should make sure we retain an understanding of how systems are operating and of how they are advising us to act.
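As a sketch of what that fault tolerance might look like in software, here’s a hypothetical “degrade, don’t die” fallback pattern – every name and value below is made up for illustration. The point is that when the clever path fails, the system drops to a cruder answer rather than to nothing:

```python
# A minimal sketch of graceful degradation: if the primary (clever) path
# fails, fall back to a simpler, older source instead of failing outright.
# All names and values here are hypothetical.

def fetch_live_reading():
    # Stand-in for a call to a complex upstream system that can fail.
    raise ConnectionError("upstream system unavailable")

def fetch_last_known_reading():
    # Stand-in for a simpler, more fault-tolerant fallback source.
    return 42.0  # last known good value

def get_reading():
    try:
        return fetch_live_reading(), "live"
    except ConnectionError:
        # Analogue-style behaviour: a fuzzier answer, not a blank screen.
        return fetch_last_known_reading(), "cached (degraded)"

if __name__ == "__main__":
    reading, source = get_reading()
    print(f"reading={reading} via {source}")
```

The design choice being illustrated is that failure becomes a gradient rather than a binary: the fallback answer is worse, and visibly labelled as such, but the humans downstream still have something to work with.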