You’ve probably heard of survivorship bias. It’s the mistake of studying only winners (the companies that made it, the founders who broke through) while ignoring the far larger number who tried the same things and failed.

It’s a well-documented problem. But there’s a second mistake that follows directly from it, and yet people don’t seem to talk about it.

Let’s call it the Actionable Insight Fallacy.

Here’s how it works

Say you’ve decided to study what makes companies successful. You’ve already made the survivorship error by picking winners. But now you make a second, compounding mistake.

You look at everything those companies did: their leadership, their processes, their culture, their timing, their luck. And you ask: what can we learn from this? What can we actually use?

Some of what made those companies successful can be written down and quantified. Their hiring practices. Their decision-making processes. Their approach to customers. These things translate into frameworks, slides, and chapters.

But some of it resists that treatment entirely. The moment a particular technology matured. The competitor who stumbled at exactly the right time. The key relationship that resulted from an introduction at a conference, an event an employee nearly didn’t attend. The pandemic that made something suddenly relevant. You can describe those things, but you can’t turn them into a framework. You can’t sell them in a leadership programme. They don’t fit in a bullet point.

So they get quietly set aside.

What remains (the processes, the habits, the structures) gets packaged up and presented as the recipe for success.

The problem

The things you discarded weren’t irrelevant. In many cases, they were the most important factors of all. But you’ve built your recipe from the ingredients that were easiest to write down, not the ones that did the most work.

This is why so many business books, leadership programmes, and best-practice frameworks feel compelling in theory but disappointing in practice. They’re not lying, exactly. The winners really did do those things. But the full picture also included factors that couldn’t be bottled and sold, and that part never made it into the chapter.

The Actionable Insight Fallacy is the gap between what drove success and what got written down about it.

The research that explains why

This isn’t a new observation, but I don’t think it’s previously been named as a single, distinct error, one that operates separately from, and on top of, survivorship bias.

Researcher Chengwei Liu has spent years studying top performance across many sectors. His finding is striking: the most successful are not merely skilful, they are disproportionately lucky. Top performance contains so much luck that the lessons drawn from studying outliers are systematically misleading. His prescription is counterintuitive: learn from the second-best, not the best, precisely because their success contains more signal and less noise. Liu’s critique of blockbuster business books like In Search of Excellence and Good to Great runs along exactly these lines. They select for extreme outcomes, then reverse-engineer the practices, and present the result as a recipe. The Actionable Insight Fallacy names the specific step where that process goes wrong.
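Liu’s finding can be illustrated with a toy simulation (my own sketch, not Liu’s actual model): give every performer equal parts skill and luck, then compare how much luck the extreme outliers carry versus the merely strong performers.

```python
import random

random.seed(42)

N = 100_000
# Each performer's outcome is half skill, half luck (an illustrative assumption).
performers = []
for _ in range(N):
    skill = random.gauss(0, 1)
    luck = random.gauss(0, 1)
    performers.append((skill + luck, skill, luck))

performers.sort(reverse=True)

def mean_luck(group):
    return sum(l for _, _, l in group) / len(group)

top_10 = performers[:10]          # the "case study" outliers
next_tier = performers[100:1000]  # strong, but not extreme, performers

print(f"mean luck, top 10:    {mean_luck(top_10):.2f}")
print(f"mean luck, next tier: {mean_luck(next_tier):.2f}")
```

Even with skill and luck weighted equally, the very top of the distribution is selected for high luck as much as high skill, while the second tier’s results contain noticeably less of it. That is the statistical shape behind “learn from the second-best.”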

Philosopher C. Thi Nguyen, in his 2026 book The Score, explains the mechanism that makes the fallacy so persistent. He calls it value capture: the process by which a metric designed to approximate something that matters ends up replacing it entirely. We start with an indicator because it can be measured, and gradually forget that the indicator was never the thing itself. In success stories, the replicable behaviours become the official explanation, not because they’re the whole story, but because they fit into a metric. The luck gets crowded out, not through dishonesty, but through the natural pressure to produce something usable.

Psychologist Richard Wiseman’s research on luck adds a necessary nuance. Lucky people do have learnable habits (openness to new experiences, varied routines, a willingness to act on hunches) that increase the probability of fortunate events occurring. Luck is not entirely unteachable. But those habits only create the conditions for luck to operate. Whether the lucky event actually arrives is still outside anyone’s control. You can learn to make yourself a more likely target for luck, but you can’t learn to manufacture the specific stroke of fortune that turned one company’s story into a case study. That part never makes it into the framework.
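Wiseman’s distinction maps onto a simple probability fact: habits can raise the number of chances you get, but not which chance pays off. A hedged sketch, with purely illustrative parameters:

```python
# Probability of at least one lucky break when each exposure
# (a conference, a conversation, a shipped side project) independently
# "hits" with small probability p.
def chance_of_a_break(p: float, exposures: int) -> float:
    return 1 - (1 - p) ** exposures

# Doubling your exposures meaningfully raises the odds of *some* break...
print(chance_of_a_break(0.01, 50))   # roughly 0.39
print(chance_of_a_break(0.01, 100))  # roughly 0.63
# ...but which specific exposure pays off remains random, which is why
# the particular stroke of fortune in any one success story can't be copied.
```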

Best practice versus good practice

The Actionable Insight Fallacy has a practical consequence that rarely gets named directly.

Good practice is something you earn. You try things, some work, some don’t, you adjust, and over time, you develop judgement grounded in your specific context. The learning is the point. The experience of acquiring it makes it reliable, because it stays calibrated to the conditions under which it was developed.

Best practice tries to skip that process. Someone else did the learning, the conclusions got codified, and you’re handed the output. The implicit promise is that you can bypass the messy, iterative work of figuring things out for yourself.

The Actionable Insight Fallacy explains why that promise is hollow. The codification process filters out the luck, the timing, and the context-specific factors. So what gets handed to you isn’t just someone else’s learning, it’s someone else’s learning with the most important variables already removed. You’re not even getting the full lesson, let alone the experience of acquiring it.

There’s a further problem. Best practice, having been abstracted and packaged for transfer, arrives stripped of the conditions under which it was developed. The recipient organisation often doesn’t know what it doesn’t know about when the practice applies and when it doesn’t. Good practice, being earned through iteration, tends to stay calibrated precisely because those conditions were part of the learning process.

Nguyen’s value capture concept applies here, too. Best practice doesn’t just skip the learning; it can actively suppress it. Once a framework arrives with institutional authority, the pressure to comply displaces the slower process of working things out for yourself. You stop asking whether it fits your situation and start asking whether you’re implementing it correctly. The question shifts from “what works here?” to “are we following the framework?” That’s a significant loss, and it often goes unnoticed until something goes wrong.

So what’s actually transferable?

If specific actions are unreliable because they were extracted from an unrepeatable context, and best practice frameworks compound that problem by removing the luck and circumstance that made those actions work, the obvious question is: what, if anything, can legitimately be learned from success?

The answer isn’t nothing. But it’s probably not what most leadership programmes are selling.

What survives the Actionable Insight Fallacy filter intact isn’t a list of things to do. It’s a set of capacities for operating in conditions where luck is a significant variable. Metaskills, rather than specific actions.

The difference matters. While a specific action can be extracted from a success story and applied mechanically, a metaskill requires internalisation and judgement. It can’t be followed like a checklist. Here are three examples of what this looks like in practice:

  • The capacity to put yourself in unfamiliar rooms before you feel ready for them. Not “attend more conferences,” but the deeper habit of seeking proximity to people and situations where luck has somewhere to operate. Luck needs proximity. Most people wait until they feel qualified. The metaskill is going anyway.
  • The habit of making work visible before it’s finished. Not a specific platform or tactic, but the understanding that luck cannot find your work sitting in a draft folder. The unexpected call goes to the person who is visibly doing the thing.
  • The reflex to look at what you built on the way to a failure. When something doesn’t work, the standard response is to tidy up and move on. That’s the moment the Actionable Insight Fallacy gets committed at an individual level. What exists now that didn’t exist before? Sometimes it’s nothing. Often it’s a skill, a relationship, or a half-built asset the next attempt can use.

These aren’t a recipe. They won’t guarantee anything. But they’re transferable in a way that specific actions aren’t, precisely because they’re about developing judgement rather than following instructions. They’re also much harder to turn into a metric, which is exactly why they’re filtered out of best-practice frameworks. The Actionable Insight Fallacy doesn’t just remove luck from success stories; it removes anything that resists quantification, and metaskills sit in that category alongside luck.
