Every day it seems I can barely move in the world of the Internet without another big blurb about how software and developers are changing the world. Can you humour me for a moment so that we can lance that particular boil?
For every Facebook or Twitter or whatever other clever doo-dad, there’s a BBC Digital Media Initiative or an Obamacare SNAFU. In fact, given the necessarily high rates of software startup attrition, for every success there are probably tens, if not hundreds or thousands, of failures at varying levels of cost.
“Ah, but,” the software utopians will argue, “those big failures are public sector. Governments are the man and can’t do software.”
But of course massive-scale failures happen across public and private sectors – we just don’t get to hear about the private sector ones unless they go spectacularly publicly wrong. Hello RBS (technically public sector, I know, but you wouldn’t know it from how they are managed or paid…).
“Ah, but,” the software utopians will counter, “those big failures are as a result of people and politics.”
Yep. And the big successes are too. You can’t claim it’s software alone that saves the day in the ones that work, and that it’s these pesky non-silicon based processing units that cause all of the foul ups.
So, software is important. But so are people. And it’s not one or the other that will lead to success – it’s a combination of both, either through planning or (with things like Facebook and Twitter) through serendipity. And they can combine either wilfully or through fate to make the world a more dreadful place too.
The theory behind the management of traditional IT within organisations has grasped all of this for many years, although practice has sometimes struggled to keep up with the textbooks. Business-aligned IT (or some such buzzword drivel) had been the aspiration for many in the industry until the rapid expansion of computing and digital technologies that we saw first with the World Wide Web, and then more recently with smart mobile devices and the rise of social networks. We have gone in short steps from the most powerful computing devices people used being in their workplaces, to being in their homes (with the advent of broadband), to being in their pockets. But it’s not just about “power” in a raw computing sense – it’s far more about the ability for people to do things that are valuable and meaningful. Traditional corporate systems feel dis-empowering in comparison, no matter what the underlying computing devices may be.
As I wrote a few days ago, it seems that the world of “traditional” IT is one where systems are used to scale repetitive tasks and reduce short-term risk. This happens either through the systemization of previously human tasks (often resulting in reduced headcount), or through the implementation of “best practice”.
The former tends to fall short because we humans are emotional creatures, not necessarily as predictable as the macro-scale analysis of economics might lead us to believe. The latter falls short because “best practice” ways of doing things are the outcome of an organisational and human learning process. Lifting the way that one group of people operates and dumping it down, out of context, onto another group while expecting the same outcomes is optimistic at best; it ignores the importance of learning along the way. It also often leads to subversion, because imposed working practices are never truly “owned” by those expected to perform them (as anyone who has studied coaching approaches will know).
Even worse, this production-line mentality about human interaction leads to systems that are remarkably fragile. They work fine for as long as nothing unexpected happens, but as soon as something unusual occurs, “the computer says no” (and the dehumanised humans around the system generally give up any remaining responsibility).
As a result, in most organisations, you find evidence of illicit, “shadow”, and quite anti-fragile systems where the real work often gets done. Take the Enterprise Resource Planning system (ERP – think Oracle, SAP and so on) out of the company, and it will probably keep running for some time, as the real business logic happens in Excel. The rise of consumer software services online to share information, communicate, collaborate and manage our networks has resulted in an increasing amount of “shadow” technology at the core of most businesses.
Traditional IT management approaches struggle at this point, because the technology that is important isn’t under the control of IT. There aren’t many markets where news that customers have jumped ship to a competitor’s service would be greeted with the traditional supplier blaming the customer.
So where that leaves many organisations is a technology management culture based on maintaining the status quo and mandating that services be bought rather than built (something that I myself have taken as a truth for many years), while the resulting official services are glacial in their ability to react to change and dreadfully fragile to boot. Meanwhile, business continues, with Dropbox, LinkedIn, Snapchat and who knows what else becoming the core “systems” that many rely upon.
We now find ourselves in a world where it’s not developers whose skills are paramount, but technology architects. People who can make sense of the business and technology costs and benefits of taking different approaches to allow computing power to create new value.
Central to this is the ability to make decisions about whether to buy or build. The evolution of Cloud computing infrastructure in the past ten years has dramatically changed the economics of getting a software business off the ground. In the first dot com boom, if you wanted to provide software to customers you either had to set up a distribution network selling things in boxes (a similar cost structure to the music industry in many ways), or had to invest hundreds of thousands, if not millions, in internet-connected servers.
Today, as anyone spending time in the bubble of London’s East End can testify, all you need is some time, some graphic design, a programmer and a half-arsed (at worst) idea. The massive capital costs of getting software out to be available to an audience have disappeared in the combination of commoditized Cloud infrastructure (Amazon in particular) and the app stores run by Apple and Google. You start cheap, and then you scale the technology underpinning your services as you need to (or don’t).
Big companies, however, struggle to take advantage of these same trends because years of IT management theory tell us it’s risky. It’s risky to put things in the Cloud. It’s risky to build things that aren’t to final scale from day one. It’s risky to build things when there is probably a best practice equivalent available off the virtual shelf.
But when should you take advantage and build things yourself, versus buying someone else’s product? This is where all business people will need to make architectural decisions going forward, because it in essence becomes a series of business, not technology, judgements.
For example, the system to control the heating in your building. Buy it. It’s not something that you want to be farting about with. Well, unless you run, say, data centres, which kick out vast amounts of heat and need cooling, all of which is energy cost (a very significant, if not the most significant, cost line). Or if you run a shopping centre, in which case the same rules might apply. Then again, you might want to outsource all of your heating issues to a third-party service provider in their entirety.
Your sales processing system? That’s one for Oracle or SAP, surely? Well, it depends on how elastic your product inventory is. If you are selling physical widgets from a warehouse or shop, you’re probably right to buy. But if you sell something more ephemeral, like intellectual property rights or advertising space, your ability to hack around with your product inventory might be intertwined with your ability to hack about with your sales processing system. But for heaven’s sake don’t forget that you’ll also be hacking around with your sales people’s reward and bonus structures too…
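The judgement running through these two examples can be caricatured as a tiny decision function. This is a deliberately crude sketch – the inputs and the ordering of the checks are my own illustrative assumptions, not a real methodology:

```python
# A tongue-in-cheek sketch of the buy-vs-build heuristic described above.
# The three inputs are illustrative assumptions, not a real decision framework.

def buy_or_build(is_core_differentiator: bool,
                 inventory_is_elastic: bool,
                 need_to_hack_frequently: bool) -> str:
    """Rough heuristic: commodity capabilities get bought; capabilities
    entangled with a fast-changing product are candidates for building."""
    if not is_core_differentiator:
        # e.g. heating control for an ordinary office building
        return "buy"
    if inventory_is_elastic or need_to_hack_frequently:
        # e.g. sales processing for ad space or IP rights,
        # where the product itself is constantly being reshaped
        return "build"
    # e.g. physical widgets from a warehouse: off-the-shelf is fine
    return "buy"

# Heating for an office building: not your business, so buy.
print(buy_or_build(False, False, False))  # -> buy
# Sales processing for an advertising-space business: build.
print(buy_or_build(True, True, True))     # -> build
```

The point of the sketch is that none of the inputs is a technology question – every one of them is a judgement about what the business is and how fast its products change.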
There is a massive risk to all of this – that building more leads us to where (it appears) many of the banks have got themselves, with very outdated central batch processing systems at their core, unable to cope with the real-time online banking services that have been hacked on around the outside (the root of the recent RBS issues, according to this analysis). This is sometimes referred to as technical debt.
There is also an expectation issue here: hacking about with things doesn’t deliver perfect systems. But then gilding the lily to produce technical “perfection” doesn’t necessarily lead to perfect systems either – it’s just that engineering, and its pursuit of zero ambiguity, has sold that myth for a very long time.
When it comes to getting stuff done, we tend toward the imperfect, anti-fragile systems which is why Excel is so important if unacknowledged. Thinking about where pooling software development with architectural and business expertise rather than just outsourcing the problem needs to change. But it involves the technology, the software, and the people for success to be realised through anything other than fluke – and more people who aren’t software people are going to need to step up to that challenge.