Listening to a talk about generative “AI” last night, it dawned on me that a lot of the current conversation appears to be conflating two very distinct issues: the potentially imminent emergence of “Artificial General Intelligence”, and an apocalyptic scenario in which humans lose control to machines.
The former point is much hyped at the moment with the emergence of tools like DALL-E and ChatGPT, but I see no real evidence that these generative applications are bringing us meaningfully closer to machines that are sentient. Yes, they are clever technologies, but they are no more sentient than a lawnmower.
But the idea that we might lose control, that the Singularity will mark a point at which we are controlled by the machines… well, I’m not sure that we need Artificial General Intelligence for the machines to take over.
What was the first thing you did this morning when you awoke? I’d wager that for many of this post’s readers, the answer will be “I reached for my smartphone”.
How much is that under our control? How much is the technology controlling us?
Many of us are addicted. Many of us have lost control, or at very least handed over control (willingly or otherwise) to the technology.
Nearly a decade ago, Nir Eyal’s book Hooked: How to Build Habit-Forming Products described a series of design patterns that would take control away from us, that would make apps “sticky”, addictive even. But even before Eyal’s work, software had become habit-forming. The variable reward of the email notification icon, something that Eyal’s book talks about, got us hooked on our BlackBerrys.
But really, is this technology controlling us, or people controlling other people? People making conscious decisions to design technology in a way that would take control away from its users.
AI, generative or General, won’t take control. But the people making it might choose to take more control from us – because they’ve done so already in so many ways…