Achievements this week included:
- shaping a couple of client events coming up in the next month – one retailer, one marketer
- some positive reflections on The Consumer at Work event…
- … and finding some folk interested in taking the format idea further
- good catch up with former Imagination colleagues
- exploration of plan B (whilst plans C, D and E appear to emerge)
- oh, and did I mention the Wembley thing? (ticket bought, seat booked)
Next week: countdown to the playoffs…
There is a fairly old analogy about Cloud Computing being a bit like the emergence of the electrical grid at the turn of the last century. The short version is that factories used to be located near sources of power – rivers for waterwheels, windy places for windmills, or somewhere fuel such as coal could easily be brought in to power steam engines (next to a canal or a railway, say). Then companies like Westinghouse developed electric grids, factories could be placed just about anywhere, and the local capabilities required to tend to the power supply suddenly became obsolete.
Similarly, so runs the analogy, with Cloud computing: computing power is supplied from the grid (read: the Internet), enabling organisations to focus on what they do rather than on the boxes that run their software.
There are, as with most analogies, a few flaws, but overall I quite like it. It serves its purpose of getting some of the key Cloud concepts across.
What has struck me in the past few days, though, is that (extending the analogy slightly) we have yet to see Cloud computing’s equivalent of the kilowatt-hour – that is, a standard way to compare the relative pricing of services. Today there are many ways in which Cloud infrastructure services are charged, but many revolve around the concept of a virtual machine: it’s a little like signing up for electricity on the basis of committing to virtual steam engines.
There are a whole stack of reasons why this is the case, some technical, some financial. Overall, though, the power analogy’s biggest weakness is that lack of a standard unit of measure…
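To make that concrete, here is a toy sketch in Python – every provider name, tier and price below is invented purely for illustration – of why VM-based pricing resists the like-for-like comparison that the kilowatt-hour allows:

```python
# Hypothetical per-hour VM pricing from two imaginary providers:
# tier name -> (vCPUs, GB RAM, price per hour in £)
provider_a = {"small": (1, 2, 0.04), "large": (4, 8, 0.18)}
provider_b = {"basic": (2, 3, 0.07), "plus": (8, 16, 0.30)}

def monthly_cost(tier, hours=730):
    """Cost of running one VM of the given tier for a typical month."""
    _vcpus, _ram_gb, price_per_hour = tier
    return price_per_hour * hours

# A workload needing roughly 4 vCPUs and 8 GB RAM maps onto different bundles:
cost_a = monthly_cost(provider_a["large"])      # one 'large' VM (4 vCPUs, 8 GB)
cost_b = 2 * monthly_cost(provider_b["basic"])  # two 'basic' VMs (4 vCPUs, 6 GB)

print(f"Provider A: £{cost_a:.2f}/month; Provider B: £{cost_b:.2f}/month")
# Provider A: £131.40/month; Provider B: £102.20/month
```

The comparison is apples to oranges: provider B’s pairing under-provisions memory, and neither price reduces to a cost per standard unit of ‘compute’ in the way electricity reduces to pence per kilowatt-hour.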
A couple of interesting articles have caught my eye in the last 24 hours: a reflective interview with Nick Carr 10 years after his bombshell HBR article “IT Doesn’t Matter”, and a piece on ZDNet capturing the CIO zeitgeist on what’s concerning them today.
My take on Carr’s original article, written on the cusp of the Cloud computing revolution, roughly boiled down to “managing boxes in server rooms preoccupies most IT departments, and that isn’t of any real value in the future”. I personally don’t think any of that has changed.
IT as a function is caught between a rock and a hard place: it wants to be seen as a crucial part of a business’ strategic outlook, but is left tending to risk, data and compliance. Look at the ZDNet article: how much of the top list is focused on limiting and controlling what people can do, and how many of the latter list are really strategic? I’m sorry, but if stopping IT from being regarded as a commodity counts as a core part of any CIO’s strategy, then they really need to look up the meaning of the word. (OK – so it was the journalist who made those categorisations – but nonetheless…)
Imagine if you were setting up a business today. Where would “set up the IT department” come in your plans? With a completely green field site, my hunch is that today you just wouldn’t do it.
Green fields are a luxury that most organisations don’t have. But are the decades of cumulative IT wisdom and practice now fit for purpose? IT is important to an organisation; so is facilities management; so is legal counsel. Whilst much of what we have built up over the years as a means of managing IT is of great importance in running services, that’s increasingly Business As Usual. Maybe the bit of IT that wants to be strategic should think about where in organisations it should find a new home (if, indeed, it hasn’t already)?
It was quite a weekend to be a Watford fan. I get to say that roughly every seven years.
Yesterday, after 180 minutes of hard-fought football, a startling 20 seconds at the end of the Watford-Leicester match at Vicarage Road resulted in the Hornets getting through to their third ever Championship play-off final. On Bank Holiday Monday we’ll be playing either Crystal Palace or Brighton for the right to become a member of the English Premier League.
The play-off final seems to have become the most valuable single game of football in the world. For the winners, I’ve seen estimates that the prize this year will amount to some £120m of TV rights (and subsequent “parachute payments” if they then fall back a league at the end of their first season in the top flight). When Watford last got promotion, in 2006, the value of their year in the sun was expected to be in the order of £40m. The first time we went into the Premier League, in 1999, it was a mere £10m.
These numbers, showing an incredible rate of inflation in comparison to the general economy, really demonstrate how much media owners are willing to pay for compelling “TV moments” (or, it has to be said if we get promotion, possibly some fairly dull TV moments too). This, along with the expansion of “event format” TV programmes (X Factor, Britain’s Got Talent and similar reality/talent shows), shows how reliant broadcasters have become on time-critical content to draw in audiences for advertisers. Time shifting, video on demand and the like mean that traditional linear TV programming just can’t attract the audiences unless there is a compelling event to tie people to a specific time of viewing.
This isn’t anything new – the Super Bowl in the US has for many years been seen as the benchmark for high-priced TV advertising slots. It’s just that there are now so many alternatives to the traditional TV channel for viewers’ eyeballs.
As new media emerge, old media reshape themselves to become more relevant (or die out entirely). When TV emerged as a mass medium in the 1950s, cinema went through a period of readjustment in which it lost B-movies and newsreels. The future of live broadcast TV probably does lie in events – either manufactured just for TV or with external significance (which is where sport holds its power). Quite what this means for other forms of programming seems to be somewhat up in the air.
Achievements this week included:
- a successful event with software houses, enterprise customers and agencies at Modern Jago
- a fun conversation in the heart of start-up land with Shhmooze CEO @michellegallen
- a peer into the world of Geo with some time at a Bing Maps app event
Next week: a moment to reflect, before getting on with it again
An interesting theme has emerged in my conversations this week: one impact of new forms of computing device, particularly those with touch screens, is that people’s fear of exploring software applications seems to be diminishing.
In the world of WIMP (windows, icons, menus, pointer) and desktop-based computing, the majority of people seem fearful of using devices. Many have had it drummed into them that doing the wrong thing in a piece of software will lead to catastrophe. But on a touch-screen smart device, people seem less afraid of the implications of trying to find out what a piece of software can do. We are less afraid to explore.
There are possibly a stack of reasons for this: the more natural ways we interact with a smart device, without the intermediation of mice and keyboards; far less complex applications; and app design that is less process-centric and more people-centric in the newer world.
On Wednesday I had the pleasure of spending some time with a couple of guys from a software company that supplies the insurance trade. We were talking about how they might ‘appify’ their products, and one of their conclusions was that they would maybe have to produce dozens of apps, each focused on a limited set of functionality relevant to a particular subset of the people who use their products.
This has big implications for the way in which such companies might design their services: much of the traditional desktop software world is based on a model of including everything and letting users find their own paths to the functionality they need; apps based on specific personae start by providing the bare minimum.
If an underlying system architecture has been well designed (some have, many haven’t), then this unbundling becomes a manageable, if complex, design issue; if it hasn’t, there are really big questions to answer in plotting a path into this new world. Unpacking the “kitchen sink” model of applications of old into the new world of smart devices is going to provide some fascinating design and architectural challenges, particularly for established software companies, who will otherwise face a strong challenge from young upstarts who design from the start for the contemporary world of user experience.
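To illustrate what that unbundling might look like, here is a minimal sketch in Python – the insurance domain, class names and methods are all invented for illustration – of one shared service layer fronted by thin, persona-specific apps:

```python
class PolicyService:
    """Shared back end: the 'kitchen sink' of capability lives here, once."""

    def quote(self, risk_profile: dict) -> float:
        return 100.0 + 10.0 * len(risk_profile)   # stubbed pricing logic

    def bind_policy(self, quote_id: str) -> str:
        return f"policy-for-{quote_id}"           # stubbed policy creation

    def claim_status(self, claim_id: str) -> str:
        return f"claim {claim_id}: under review"  # stubbed claims lookup


class BrokerApp:
    """Persona: broker. Quoting and binding only – no claims clutter."""

    def __init__(self, service: PolicyService):
        self._service = service

    def get_quote(self, risk_profile: dict) -> float:
        return self._service.quote(risk_profile)

    def buy(self, quote_id: str) -> str:
        return self._service.bind_policy(quote_id)


class ClaimsHandlerApp:
    """Persona: claims handler. Claims functions only – no quoting."""

    def __init__(self, service: PolicyService):
        self._service = service

    def check(self, claim_id: str) -> str:
        return self._service.claim_status(claim_id)


# Each app starts from the bare minimum its persona needs.
broker = BrokerApp(PolicyService())
print(broker.get_quote({"vehicle": "van", "driver_age": 42}))  # 120.0
```

Whether this is a manageable exercise or an architectural nightmare depends entirely on whether the equivalent of PolicyService was designed so that it can be sliced this way – which is exactly the question facing established software companies.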