I’ve been thinking lately about how we’re rather good at solving problems, but surprisingly bad at identifying what the problems actually are.

Take the double diamond process that every design consultant worth their salt will draw on a whiteboard: diverge to explore the problem, converge on a definition, diverge again to generate solutions, and converge to deliver. We can tackle any well-defined problem by applying an arsenal of techniques, including user research, ideation workshops, rapid prototyping, and A/B testing. We’ve industrialised solution-finding.

But here’s what’s been nagging at me: we seem to assume that the problems themselves are just sitting there, fully formed and waiting to be solved. As if they’ve fallen from the sky with a neat little brief attached.

When Innovation Gets Trapped

The trouble starts when genuinely transformative technology appears. Our first instinct – and I’ve observed this happen repeatedly – is to examine our existing list of problems and ask: “Right, which of these can this new thing solve?”

Take steam-diesel locomotives. These monstrosities emerged in the early 20th century when railway companies sought to transition away from coal but couldn’t bear the thought of dismantling all of their steam infrastructure. So some built hybrid engines in which burning oil raised steam to drive the pistons at low speeds, then drove those same pistons directly once the locomotive was up to speed. Technically clever, commercially doomed. They were solving the problem “how do we reduce coal dependency whilst keeping our steam knowledge relevant?” rather than asking “what could rail transport become if we started fresh?”

Or consider how we still design word processors. I’m writing this in a tool that’s fundamentally concerned with page breaks, margins, and font choices – all optimised for printing. When did you last print a document? I honestly can’t remember. Yet we’re still solving the problem of “how do we make better documents for paper” rather than “what should digital text actually look like on a screen?” (and no, the answer is not “a PDF”).

Even Henry Ford’s apocryphal quote about faster horses (which he probably never said, but never mind) captures this perfectly. If you’d asked people what they wanted, they’d have requested improvements to existing transportation: faster horses, more comfortable carriages, better roads. They wouldn’t have said, “I’d like to sit in a metal box powered by controlled explosions.” The problem, as they understood it, was horse-related (and there were some very significant horse-related problems).

I’m witnessing this pattern unfold with AI right now, and it’s been on my mind while working on something I call the AI Play Matrix – a framework for mapping different approaches to AI innovation.

Most organisations desperately want to live in what I call the “AI Plan” quadrant: known business problems with proven AI solutions. It feels safe, manageable, and fits their existing governance models.

But for generative AI – where all the hype is – there’s barely anything in that quadrant yet.

Meanwhile, there’s frantic activity in what I call “AI Follow” – AI capabilities looking for business problems that may or may not exist. “Let’s add AI to everything!” or “We need some generative AI because our competitors have it.” It’s the steam-diesel trap all over again: taking this remarkable new capability and immediately asking which of our existing problems it can solve.

Liberation Through Problem Discovery

Here’s what I think we need to get better at: being as intentional about identifying problems as we are about solving them.

When a new technology emerges – properly new, not just an incremental improvement – we should probably resist the urge to immediately map it onto our current problem set. Instead, we might ask: “What problems could this solve that we’re not even thinking about yet?”

This requires a particular kind of intellectual honesty. We must acknowledge that our current problem definitions are limited by what we believe is possible.

I suspect this is partly why the most interesting applications of new technologies often come from outsiders – people who don’t carry the baggage of knowing what the “real” problems are supposed to be. They’re not trying to make better solutions to established problems; they’re stumbling into entirely new problem spaces.

Maybe sometimes the old problems really are the right problems, and incremental innovation is exactly what’s needed. But I can’t shake the feeling that we’re leaving a lot of potential on the table by being so problem-conservative when technology gives us the chance to be problem-radical.

The questions that spring to mind are: how do we build organisational muscle for problem discovery that’s as well developed as our muscle for solution discovery? And how do we create space for people to step back and ask not just “how might we solve this better?” but “what if this isn’t the right problem at all?”

The AI Play Matrix has given me part of the answer. The real problem discovery happens in what I call the bottom half – “AI Tinker” and “AI Adapt.” That’s where organisations are willing to say, “We don’t actually know what the problem is yet.” The Tinker quadrant, especially, is about playing with new AI capabilities to see what opportunities emerge, without a predetermined problem to solve.

The accompanying AI Play Skills offers some pointers on what people need to be able to do to make this work: collaborate effectively, be unafraid to frame experiments rather than chase truths, and turn ideas into things they can test quickly.

Most people in organisations find this deeply uncomfortable because there’s no predetermined ROI to point at. It feels unproductive. But it’s actually how you discover possibilities you couldn’t see before – and how you spot the new problems worth solving.

Together, the Matrix and Play Skills give organisations permission to operate in problem-discovery mode – something most business frameworks don’t. They legitimise the “we don’t know what we don’t know” space that’s essential for genuine innovation rather than just incremental improvement.

I suspect we need more frameworks like this. Tools that make it okay to spend time in the space where the problems aren’t fully defined yet. Because I still have a nagging suspicion that the next big breakthrough is probably hiding in a problem we haven’t yet thought to articulate.
