When did the world become app-centric?

Something has been nagging away at me for the past couple of weeks, other than the usual things around doing more exercise, eating less crap, spending more valuable time with the kids and the realisation that it’s now my age as well as my lack of innate ability that means I’ll never play for Watford (simple things…).

The thing that’s been nagging at me is that the world of software has shifted in the past five or so years, to one that is once again application- rather than activity- or document-centric, and I can’t pinpoint exactly why.

Let me explain.

Back in the days before windows (and Windows (TM)), doing anything with a computer involved first starting up the program that you wanted to use. In the really old days that meant entering the program into the computer (punch cards, typing… depending on how far back you want to go) and then running it; in the days of MS-DOS it meant starting the program from a disk of some sort. Once the program was up and running you could do things, maybe then creating a document that could be saved back to disk to be reused at a later date. This was a program- or application-centric world. If you wanted to do something, you needed to know (and start) the program in which to work before you could get on with anything.

When the WIMP (Windows, Icons, Mouse, Pointer) model for computer interaction emerged into the mainstream in the 1980s, one of the key differences was that it moved us into a more document-centric world; for the first time it was really possible to get working by starting with the document rather than the application that created it. The operating system (Windows, MacOS etc) became a view into the world of what you had done, through the documents that you had created.

This morning I was reading Joel Runyon’s recent blog post about a chance encounter with Russell Kirsch, progenitor of digital photography and of the US’s first programmable computer. Kirsch, in that conversation, railed against tablet devices, and Apple in general, because he saw them trending towards becoming nothing but content consumption devices, unable to be programmed by the user. Ironically, I remember feeling exactly the same thing about the first WIMP device I used, the Atari ST, when I found myself unable to control it in the way I had controlled what came before (for me, a BBC Model B).

Far be it from me to disagree with someone of such computing eminence as Kirsch, but I’m going to anyway. It seems to me that whilst the smart, touch-screen device world we are now in may be geared more to consuming than producing content (app-centricity being part of that), the models that allow people to create and distribute applications are now much simpler than in the world that immediately preceded it. In my days of writing programs for the BBC B, a program consisted of one block of code that was written, saved and then executed. The world of the ST, MacOS or Windows saw a huge level of complexity around libraries and installers and all sorts of things that have been cleaned out again in the modern Apps world. Whilst you might not be able to program on the device itself (although of course that won’t be the case with many Windows 8 devices), the act of programming seems to me to be much simpler again.

I don’t think I’ve ever bought the idea that everyone needs to be able to program to really use a computer, and that idea is surely now less potent than it ever has been. I’m also not sold on the idea that a content consumption device is in some way a less “useful” computer – our ability to gain instantaneous access to a world of information is surely the big revolution of the past 20 years?

I do, though, wonder if the pendulum between app- and document-centricity will swing back again in the near future, with more focus on what people want to do rather than the app they wish to use to do it…
