Last week I had the pleasure of catching up with a former colleague, G. He and I, some twenty-one years ago, were involved in a project that delivered the very first Data Warehouse into the BBC. It was a project that had oversight from John Birt, the then Director General, and a manager with a fearsome reputation as a bean counter.
The project was big. Around a million pounds of investment, if I remember correctly, what with the hardware and the software and the people and the training. I might be misremembering the numbers. But it was certainly of that sort of order.
I’d got in touch with G to find out what he knew of the world of data consulting these days, as I find myself on the precipice of a similar sort of project in the year ahead.
He wasn’t aware of much in the domain these days – he’s been far more of a generalist for much of the time since we worked together. But his next comment has got me thinking.
“We’ve pretty much had a decade of crap IT.”
Now the rationale behind this is the bit that is really interesting. Back in the days when a data warehouse would cost you a million quid, there was a lot of rigour associated with such initiatives. Pretty much any IT project in an organisation of any sort of scale was £100,000 upwards, and that’s because you had to start with hardware, then license operating systems and database servers and application servers, then buy or build some software to sit on top of it all, and then run the thing, train users and do change management (of both the people and technological varieties).
You’d then have 20% or so in support costs, year in, year out, and everyone remembered the stat that the cost of buying an IT thing in the first place was only about 1/7th of its total cost of ownership. So you’d have architectural standards and data standards and put some effort in. And you’d do some pretty heavy-duty project management too.
And then came The Cloud.
The Cloud didn’t immediately break this (and one could argue strongly that much of the systems work that came out of projects back then was mildly disappointing if you were lucky). But between the first emergence of Cloud-like models at the turn of the century and their widespread adoption in the mid-2010s, something interesting happened, at least according to G.
Interpreting a little of what he said: the cost of doing architecture and project management and training and change management was always budgeted as a percentage of the total cost of the overall project, which was significantly bumped up by the costs of hardware and software. Of your £1m, 10% would go on change, 20% would go on project management, and so on.
But when the cost of “launching” a new technology product becomes so low that it’s almost or entirely “free” (Slack, Teams and many others)… well, then the stuff that you used to pay for as a percentage of the total cost magically becomes free too. Except it doesn’t. Because the cost of change management associated with introducing a new service isn’t merely a function of the total cost of your hardware and software. And neither is the cost of good architectural management.
And so it doesn’t happen. But the need is there just as it always was, because “an intuitive User Interface” doesn’t magically align business processes to technology systems and “Open APIs” don’t mean that services magically interface with one another. Which is what G meant by a decade of “crap IT”.
I’m not sure I entirely buy into this, but there is a pretty good case to be made. Back in 2015, when I was working on the Who Shares Wins paper for Leading Edge Forum, a side note of interest was that organisations that had implemented Jabber seemed to get more out of it than those who had just gone with whatever collaboration tools came with the rest of their “productivity suite”.
I speculated at the time that this might be because Jabber was so very expensive, and so it was given the change support it required to be successfully adopted…