In last week’s WB-40, guest Rufus Evison drew an interesting analogy between how LLMs work and Daniel Kahneman’s model of fast and slow thinking. Rufus described LLM responses as “fast”: almost instinctive, built on past experience and pattern matching. The problem, as he saw it, is that they need to be more “slow”: deliberative and logical.

It’s a neat analogy, and like most neat analogies, it’s probably wrong. But it’s wrong in a way that got me thinking about something much more fundamental: whether the arrival of LLMs forces us to reevaluate views about computers, data, and “truth” which have been held for the past sixty years (if not longer).

The Database Delusion

Imagine the early 1970s. Computing power was expensive, storage was precious, and every byte mattered. In that world of scarcity, we built relational databases with a very particular philosophy: one version of the truth, normalised data structures, and the promise that if we just got our schema right, we could model reality accurately and completely.

Edgar Codd’s relational model wasn’t just a technical specification – it was a worldview. Entities have fixed relationships. A customer is a customer, a product is a product, and the data about them can be cleanly separated, stored once, and referenced everywhere. Duplication was the enemy. Inconsistency was a bug to be fixed. The goal was a pure, mathematical representation of the world where every question had a single, correct answer.
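To make that worldview concrete, here’s a minimal sketch of the sort of normalised schema it encourages (the tables and columns are my own invention, and I’m using SQLite purely for illustration): every fact lives in exactly one place, and everything else just points at it.

```python
import sqlite3

# An illustrative "single version of the truth" schema:
# each fact is stored once and referenced everywhere else.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL
    );
    CREATE TABLE product (
        product_id  INTEGER PRIMARY KEY,
        name        TEXT NOT NULL,
        unit_price  REAL NOT NULL
    );
    CREATE TABLE orders (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
        product_id  INTEGER NOT NULL REFERENCES product(product_id),
        quantity    INTEGER NOT NULL
    );
""")
# The customer's name lives in one row; every order merely points at it.
# Change it there and, by design, the whole system agrees.
```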

This wasn’t unreasonable given the constraints. When you’re working with kilobytes of storage and processing power measured in operations per second rather than billions, you need rigid structures. You can’t afford the luxury of ambiguity.

But here’s the thing: we never really left that mindset behind, even as the constraints that created it began to evaporate. We still talk about “single sources of truth” and “master data management” as if reality could be tamed into neat, consistent tables.

The Multiplicity Problem

The trouble is that reality has never cooperated with our database schemas.

Take something as apparently simple as “customer.” In the marketing department, a customer is someone who’s bought something in the last 18 months. In sales, it’s anyone who’s ever expressed interest. In finance, it refers to an entity with an outstanding balance. In customer service, it’s whoever’s calling with a problem. In the data warehouse, it’s a unique identifier with a start date and a status flag.

Which definition is correct? All of them. None of them. The question itself is malformed.
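Here’s a toy sketch of why (the fields and thresholds are invented for illustration): each department’s “customer” is simply a different predicate applied to the same records, and each predicate gives a different answer.

```python
from datetime import date, timedelta

# One set of records, many "customers": each department applies its own definition.
people = [
    {"id": 1, "last_purchase": date(2024, 11, 3), "expressed_interest": True,
     "outstanding_balance": 0.0,   "open_ticket": False},
    {"id": 2, "last_purchase": None,              "expressed_interest": True,
     "outstanding_balance": 120.0, "open_ticket": True},
]

today = date(2025, 6, 1)

definitions = {
    # Marketing: bought something in the last 18 months.
    "marketing": lambda p: p["last_purchase"] is not None
                           and (today - p["last_purchase"]) < timedelta(days=18 * 30),
    # Sales: has ever expressed interest.
    "sales": lambda p: p["expressed_interest"],
    # Finance: has an outstanding balance.
    "finance": lambda p: p["outstanding_balance"] > 0,
    # Customer service: currently calling with a problem.
    "service": lambda p: p["open_ticket"],
}

for dept, is_customer in definitions.items():
    ids = [p["id"] for p in people if is_customer(p)]
    print(f"{dept:10s} counts these as customers: {ids}")
# Same records, four different (and equally defensible) answers.
```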

Or consider “product.” Is it the SKU? The brand? The category? The experience? The outcome it delivers? The new thing that technology people keep telling us must be managed and owned?

A product can simultaneously be a physical thing, a service offering, a line item in a contract, and a marketing concept. Each of these meanings exists in its own context, serves its own purpose, and resists reduction to a single, universal definition.

We’ve spent decades trying to resolve these multiplicities through ever more sophisticated data models, master data management systems, and ontologies. We’ve created data dictionaries, semantic layers, and knowledge graphs, all in service of the fantasy that we can arrive at a single, shared understanding of what things are and what they mean.

But we can’t, because meaning isn’t singular. It’s contextual, cultural, and constantly shifting. The same piece of information holds different meanings for different people at different times and for various purposes. This isn’t a failure of our data systems – it’s the fundamental nature of how humans work with information.

The Fast and Slow of It

Which brings us back to those LLMs and their allegedly problematic thinking style.

The criticism that LLMs are too much like System 1 thinking – fast, associative, pattern-matching – misses something important. Human System 1 thinking isn’t broken; it’s how we navigate most of our daily existence. We don’t carefully reason through every decision or methodically analyse every piece of information we encounter. We work with approximations, associations, and contextual hunches that are usually good enough.

More importantly, our “slow” System 2 thinking isn’t the dispassionate logic engine we like to pretend it is. It’s still grounded in culture, experience, and perspective. Two people can apply rigorous logical thinking to the same problem and arrive at completely different conclusions, not because one of them is wrong, but because they’re starting from different assumptions about what matters. There is even a school of thought which holds that System 2 thinking is merely System 1 thinking given time to post-rationalise.

The relentless pursuit of System 2-style “correct” answers assumes there are correct answers to be found. But in the messy world of human meaning-making, correctness is often beside the point. What matters is usefulness, relevance, and fit for purpose.

Computers That Don’t Compute

This suggests something that feels almost heretical: LLMs aren’t broken computers that need fixing. They may represent a fundamentally different approach to information processing altogether.

Traditional computers compute – they execute precise instructions to produce deterministic outputs. LLMs do something else entirely. They explore semantic spaces, surface associations, and generate possibilities. They’re less like calculators and more like conversation partners who’ve read everything but remember it all slightly differently.
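As a loose illustration of that difference (the words and probabilities below are entirely made up, and real models work over learned token distributions rather than a hard-coded dictionary), compare a deterministic calculation with sampling from a distribution of plausible continuations:

```python
import random

# Traditional computing: the same input always yields the same output.
def vat(price: float) -> float:
    return round(price * 0.20, 2)

print(vat(100.0))  # 20.0, every single time

# LLM-style generation: sample the next word from a distribution of
# plausible continuations (toy distribution, invented for illustration).
next_word_probs = {"customer": 0.4, "client": 0.3, "user": 0.2, "account": 0.1}

def sample_next_word() -> str:
    words, weights = zip(*next_word_probs.items())
    return random.choices(words, weights=weights, k=1)[0]

print([sample_next_word() for _ in range(5)])  # different runs, different answers
```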

This isn’t necessarily better or worse than traditional computing. It’s different. And that difference might be pointing us toward a more honest way of thinking about what we’re doing when we use computers to help us make sense of the world.

Instead of seeking a single, “true” answer, we may be moving toward systems that help us explore multiple perspectives on complex problems. Instead of eliminating ambiguity, we might be learning to work productively with it. Instead of computing definitive solutions, we might be navigating possibility spaces.

The Questions We’re Not Asking

All of which leads me to some uncomfortable questions that I’m not sure we’re ready to ask, let alone answer.

If meaning is contextual and multiple rather than fixed and singular, what does it mean to design information systems? Do we need architectures that accommodate ambiguity rather than eliminate it?

If the goal isn’t finding the right answer but exploring useful possibilities, how do we evaluate the success of our systems? What replaces accuracy when accuracy itself becomes a problematic concept?

And perhaps most fundamentally: if computers stop being primarily about computation and become primarily about meaning-making, what does that do to how we organise work, make decisions, and relate to information itself?

These aren’t just technical questions. They’re questions about how we think, how we know things, and how we navigate uncertainty. The arrival of LLMs might be forcing us to confront the reality that our sixty-year quest for clean data and single sources of truth was always a category error.

Are we ready to think differently about what computers are for, or will we spend the next decade trying to force these new tools back into the old patterns, wondering why they don’t quite fit?

I suspect the answer will tell us more about ourselves than about the technology.
