Back in the early days of my career, deciding to do something new with information technology was an expensive business. Before you did anything, you needed hardware: servers to run things on, and software to run on those servers. The things you required arrived in boxes, even the software, which back then came in the form of dozens of floppy disks (or “The Save Icon”, as my children know them).
As a result, the way in which technology spending was managed was geared around capital investment. Even if it depreciated from the moment that you opened the box, those servers and that software had some sort of book value. It was an asset in the same way in which a machine on a production line was an asset. It could be sold on.
Capital investment in organisations is managed through the wizardry and witchcraft of depreciation. Because the asset has a book value, even though you pay for it upfront, you can stagger the impact of that cost over a number of years.
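To make that staggering concrete, here is a minimal sketch of straight-line depreciation, the simplest and most common method. The figures are purely illustrative, not drawn from any real business case:

```python
def straight_line_depreciation(purchase_cost, salvage_value, useful_life_years):
    """Annual charge under the straight-line method: the cost (less any
    residual value) is spread evenly across the asset's useful life."""
    return (purchase_cost - salvage_value) / useful_life_years

# A hypothetical £500,000 server estate, written down to £50,000 over five years.
annual_charge = straight_line_depreciation(500_000, 50_000, 5)
print(annual_charge)  # 90000.0 hits the books each year, not £500,000 in year one
```

So although the cash leaves the business on day one, the profit-and-loss account only feels £90,000 of it per year.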
Because investment was needed up front, there was a tendency for people to commit to as much spending as possible in one go. There’s a strange dynamic in most organisations which means that, for the effort involved, there is a better return on raising a business case to invest a million pounds than on raising one to invest one hundred thousand. If you’ve got to go through the pain of writing the thing and getting it approved, you might as well go for as much as you can so you don’t have to go through the pain again.
This was all fine. Except too often the projects failed. And they failed for many reasons, but often partly because too many assumptions were made up front and a product was delivered that failed to meet the needs of the people who were intended to use it.
Since the late 1990s, two phenomena have changed all of this: the rise of agile methods and the rise of cloud computing.
Agile methods, which are approaches to developing software that assume you can’t know what you should be building until you start building it and putting it in front of people, created ways of working that were unique to the medium.
Cloud computing, making processing power available through platforms delivered over the internet, broke the need for organisations to buy lots of expensive hardware and software before they could actually do anything. Today it is possible to build and deploy a system on a global basis with no up-front capital investment needed at all.
What these two trends have also done is shatter the idea that software is something that can be purchased on a one-off basis and then just left to run in perpetuity. There is some software like that, but in reality, the decision to create a new software system is a decision to commit to a lifetime of costs to feed and water that software. Why? Because the world around that software continually changes, and the software needs to change with it.
But organisations still struggle, after two decades of these trends, to break away from the “big capital investment up front” model of financial control that sits around information technology. Why? There are, I’m sure, many reasons, but at the core I think there are some systemic things in finance that significantly hamper change, and there is a lack of understanding of finance within IT management and vice versa.
The systemic things are very deep-rooted. For example, many commercial organisations use an accounting measure called EBITDA to report on their profitability. EBITDA is a well-established way to make your profits look better: it stands for Earnings (ie profit) Before Interest (payments on loans), Tax, Depreciation and Amortisation. Because depreciation is excluded from the figure, capital spending barely dents it. Move things from capital investment to operating costs and, even though you are spending the same money, you negatively hit your business’s reported profits.
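A toy calculation shows why the CapEx-to-OpEx shift stings under EBITDA reporting. All the figures here are invented for illustration:

```python
def ebitda(revenue, operating_costs):
    # EBITDA excludes interest, tax, depreciation and amortisation,
    # but operating costs (such as cloud subscriptions) count in full.
    return revenue - operating_costs

revenue = 10_000_000
base_operating_costs = 2_000_000

# Scenario A: buy £1m of servers outright (CapEx). The spend becomes
# depreciation, which EBITDA ignores, so the headline figure is untouched.
capex_scenario = ebitda(revenue, base_operating_costs)

# Scenario B: rent equivalent capacity for £200k a year (OpEx).
# The same computing power now reduces EBITDA directly.
opex_scenario = ebitda(revenue, base_operating_costs + 200_000)

print(capex_scenario, opex_scenario)  # 8000000 7800000
```

The cloud option may well be cheaper in cash terms over the asset’s life, but on this one headline measure it looks worse every single year, which is exactly the incentive problem described above.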
I don’t know how that stuff changes, other than through addressing the understanding gaps, and that will take time.
We have used “utilities” as a metaphor to describe IT in the cloud era, but it’s really not much like that. While pure computing power or data storage might be like consuming electricity, there is much more going on in IT that adds value.
But how about if we think of IT spending like transportation spending? A company might post stuff using the conventional postal service or couriers. People are transported by trains, planes and cars. Goods are transported by fleets of lorries. Sometimes specialist hauliers are brought in to transport complex or unusual loads.
But unless an organisation decides to buy a fleet of lorries or even a private jet (who’d be mad enough to do that?) none of these services would be considered a capital investment. And even if you did buy some lorries or jets, you’d know that the lifetime costs of those things would sit in staff and fuel and servicing and insurance, not just the upfront price.
A few years ago I spent a bit of time with a CIO at a company that sold bits of rock of one form or another, usually into the construction industry. Rock is a product that is almost all about the cost of transportation. They owned quarries, but the value they generated from them came from taking the rock and putting it in other places. And all of that transportation was operating cost, not capital investment. Maybe we should start thinking about data as rock, and IT as transportation services?
There are undoubtedly still cases where there is a need for capital investment in IT, and there is also a case that the operating cost/subscription fees model for cloud computing consumption is far more advantageous for cloud technology companies than it is for their customers. But capital investment thinking forces all sorts of strange behaviours that cause projects to fail, and can be a significant barrier to small-scale experimentation and iteration that can lead to much greater business value.
It’s not a problem, though, that can be solved by CIOs or CFOs in isolation.