A friend of mine is a copywriter. We recently messaged about AI and what it means for her work, and I suggested that her job wasn’t really about writing words. She was, understandably, a little prickly about this.
Her response, once the initial reaction passed, was this:
“I suppose my initial reaction is to feel prickly about writing words being immaterial, but you have a point. It’s not the writing per se. It’s choosing which words, working out the order in which they should be presented in order to have maximum impact and seeing the patterns in words that have, since time immemorial, persuaded people to know/feel/do something. Communicating, in a nutshell.”
That reframing, from “producing words” to “communicating”, is exactly the kind of thinking that most of us struggle to do about our own work. And in a period where AI is generating enormous anxiety about jobs, it might be the most useful thinking we can do.
First, a word about the noise
Before getting into what any of this means for your own work, it’s worth pausing on the context in which you’re probably reading this. There is a lot of noise at the moment about AI and jobs. Some of it is genuine signal. Some of it isn’t.
A number of companies have recently announced headcount reductions and, proudly in some cases, attributed them to AI. It’s worth being sceptical about this. We are in a difficult economic climate, in which many organisations find themselves carrying more people than their finances comfortably support. “We are restructuring because of AI” is a considerably more market-friendly statement than “we hired too many people and can no longer afford them.” Both might be true at the same time, but the AI framing tends to dominate the headline.
This isn’t to say that AI isn’t having a real effect on employment. It is, and it will continue to do so. But if you or someone you know has recently lost a job, the honest question is whether AI was genuinely the cause or just the available narrative. The anger, if there is any, deserves to be placed accurately.
We’ve been here before
The fear that technology is about to make human work redundant is not new. It surfaces with every significant technological shift, and it has a consistent track record of being both partly right and largely wrong.
The Luddites of the 1810s are routinely misrepresented as simply anti-technology. They weren’t. They were skilled craftsmen whose specific craft, hand-loom weaving, was becoming economically redundant. The machines didn’t do what the weavers did; they made what the weavers did unnecessary. In 1930, Keynes coined the phrase “technological unemployment” and predicted that productivity gains would create a leisure problem by 2030. He wasn’t entirely wrong about productivity; he was wrong about who would benefit from it.
But here’s what I think the standard telling of this story gets wrong. We tend to frame it as “technology taking jobs.” The more accurate framing is that technologies become obsolete, and the jobs built around them go with them.
Consider the horse economy of the nineteenth century. The arrival of the motor car didn’t produce a robot horse – it rendered the horse redundant as a primary means of transport. And with it went an entire ecosystem of work: farriers, ostlers, coachmen, feed merchants, the infrastructure of coaching inns. Those jobs didn’t get automated. The thing they existed to serve simply stopped being needed.
The question isn’t always “will a machine do my job?” It’s sometimes “is the thing my job exists to serve still the dominant platform?”
The assembler programmer who sneered at BASIC
I’ve been having a lot of conversations with software developers about AI recently. They span a wide spectrum of views. Some are experimenting, adapting, and finding the tools genuinely useful. Some don’t have the time to experiment, or are working in environments where the tools are banned. Some, however, are just sneering.
The sneer tends to take a particular form: “A machine couldn’t write really good code.” Which may or may not be true, depending on what you mean by ‘good,’ what you mean by ‘code,’ and what the baseline for comparison is. But it’s worth noting that this argument has a history.
The progression from machine code to assembler to early languages like FORTRAN and COBOL to the object-oriented languages that followed was, at each stage, met with resistance from people who felt the new abstraction layer was somehow less real, less rigorous, than what they knew. I’m quite sure there were assembler programmers who sneered at BASIC as a betrayal of something essential about how computers actually worked.
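To make the abstraction point concrete, here’s a toy illustration of one small task written three ways. It’s a sketch of the general idea rather than anything from that history, and it cheats by expressing all three levels in a single modern language:

```python
# The same task at three levels of abstraction.
# A toy illustration only; the task and numbers are invented for this post.

prices = [19.99, 4.50, 3.25]

# Level 1, assembler-ish: explicit index arithmetic, manual accumulation.
total = 0.0
i = 0
while i < len(prices):
    total = total + prices[i]
    i = i + 1

# Level 2, early high-level language: the loop is abstracted away,
# but you still spell out the accumulation yourself.
total = 0.0
for price in prices:
    total += price

# Level 3, modern: the whole pattern collapses into one named operation.
total = sum(prices)
```

Each step up discards detail the previous generation considered essential, and each produces the same answer.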
AI-assisted coding, in this framing, is the next step in a long series of abstractions. Whether that’s good or bad for the craft of programming is a reasonable debate. Whether it makes the sneer a reliable guide to what happens next is a different question.
The confirmation bias here is specific and worth naming: we notice when the AI produces bad code. We are less attentive to our own bad code. We reach a comfortable conclusion.
The method is not the function
What the copywriter conversation illustrates is a tendency that goes well beyond copywriting. When asked what we do, most of us describe the method rather than the function. We say “I write code,” or “I manage projects,” or “I produce reports.” These are descriptions of activities, not of value.
The problem with describing methods is that they change. The function, what the work is ultimately for, tends to be more durable. A project manager whose function is “creating the conditions for complex work to succeed” has a much more resilient self-description than one whose function is “maintaining a RAID log and chairing a weekly standup.”
This is also, I think, why people whose professional identity is tightly bound to a specific set of practices find technological change particularly threatening. It’s not just that their skills might become less valuable. It’s that the challenge cuts closer to how they understand themselves.
Three questions worth sitting with
None of this is to say that everything will be fine, or that no jobs will be lost, or that the disruption is overstated; it isn’t. But the anxiety tends to be most useful when it prompts reflection rather than resistance.
So, three questions that seem worth asking about your own work:
What do you actually do? Not the activities, not the job title, not the tools. What function does your work serve, and for whom? If you strip away the method, what remains?
Is the platform your work serves still dominant? Are you, in some sense, tending a horse? Not because the work isn’t real, but because the thing it exists to serve might be shifting underneath it.
How might the new tools serve that underlying function? This is where it gets interesting. I’ve been using AI to build lightweight, disposable apps to facilitate in-person events. That’s not AI replacing what I do. Rather, it’s AI serving the function (helping groups of people think together) in ways that weren’t previously accessible to me. The method has changed. The function hasn’t.
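For flavour, here’s a minimal sketch of the sort of thing I mean. The stack (Flask) and every route name below are illustrative assumptions rather than the actual apps: a page where a room full of people submit one-line ideas from their phones, and a “wall” projected at the front.

```python
# A minimal sketch of a disposable event app. Flask and all route names
# are illustrative assumptions, not the actual apps described in the post.
from flask import Flask, request, redirect

app = Flask(__name__)
responses: list[str] = []  # kept in memory only; the app is meant to be thrown away

@app.route("/", methods=["GET"])
def idea_form():
    # A bare form that participants reach from their phones.
    return (
        "<form method='post' action='/submit'>"
        "<input name='idea' placeholder='One idea, one line'>"
        "<button>Send</button>"
        "</form>"
    )

@app.route("/submit", methods=["POST"])
def submit():
    responses.append(request.form.get("idea", "").strip())
    return redirect("/wall")

@app.route("/wall", methods=["GET"])
def wall():
    # Projected at the front of the room; refresh to see new entries.
    return "<br>".join(r for r in responses if r)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```

None of this is production software, and that’s the point. It exists for one workshop, serves the function of helping the room think together, and then gets deleted.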
My copywriter friend won’t be replaced by a language model. But a copywriter who thinks their job is producing words, and who doesn’t interrogate that assumption, might find themselves in a more precarious position than one who understands they’re in the business of communication.
The distinction, as it turns out, matters quite a lot.