Most frameworks are forward-looking. SCR structures what’s happening and what to do. Issue trees map where to look. The 2x2 matrix helps you choose between options. Hypothesis-driven thinking tests whether your guess is right.
The pre-mortem does something completely different. It assumes you’ve already made the decision. It assumes you moved forward. And then it assumes you failed. The question is: “We did it. It’s twelve months later. It failed catastrophically. Why?”
This framework is specifically designed to map risks proactively, before any failure occurs, by imagining a future catastrophe and working backward. It acts as a “prospective postmortem,” conducted at the project’s start or when a plan is concrete, to surface blind spots, optimism bias, and hidden vulnerabilities that teams overlook in forward planning.
That’s the exercise: Project yourself into a future where the decision was made and everything went wrong. Then work backward to identify why it went wrong. Not what might go wrong, but what DID go wrong, past tense, as if it already happened.
The shift from “might” to “did” is the key move. When you ask people “what could go wrong?” they self-censor. They don’t want to seem negative or paranoid. They offer safe, obvious risks. But when you say “it failed, tell me why,” something unlocks. People give you the real fears. The ones they’ve been holding privately. The ugly scenarios they didn’t want to voice because it would sound disloyal or pessimistic.
Who created it? American research psychologist Gary Klein. He developed the pre-mortem as part of his work on naturalistic decision making. He noticed that most risk analysis happens in a way that’s psychologically biased toward optimism: once a team has committed to a plan, it unconsciously filters out disconfirming information. The pre-mortem breaks that dynamic by making failure the starting assumption rather than an uncomfortable possibility. Klein’s 2007 Harvard Business Review piece made the pre-mortem widely known.
Why does it matter? When an executive has made a decision and is about to move, she doesn’t need more encouragement. What she needs is someone who can safely surface risks she’s not seeing, without being perceived as negative, disloyal, or obstructive. The pre-mortem gives you a structure for doing exactly that. You’re not saying “I think this will fail.” You’re saying “Let’s assume it failed. What would we wish we’d seen in advance?” That framing changes the emotional dynamic entirely.
How organizations use it. The military uses it in mission planning: before an operation, the team imagines the mission failed and identifies the most likely causes. NASA uses it in engineering reviews. Consulting firms use it before presenting final recommendations to clients: “If the client implements this and it doesn’t work, what did we miss?” Hospitals use it in surgical planning.
The format is usually the same. The leader says: “Imagine it’s one year from now. We implemented this plan and it was a disaster. Take two minutes and write down every reason you can think of for why it failed.” Everyone writes independently. This prevents groupthink. Then the reasons are shared, discussed, and the most critical ones addressed before moving forward.
Variations:
- Severity-weighted pre-mortem. After generating failure scenarios, you rank them on two dimensions: likelihood of occurring and severity of impact if they do. This connects directly to the risk-impact 2x2 matrix. High-likelihood, high-severity failures get immediate mitigation. Low-likelihood, low-severity ones get noted and monitored.
- Time-phased pre-mortem. Instead of one generic failure scenario, you imagine failure at different time horizons. What goes wrong in the first month? The first quarter? The first year? This surfaces different types of risk: early failures tend to be execution problems, later failures tend to be strategic misreads.
- Stakeholder pre-mortem. You run the exercise from the perspective of different stakeholders. How does this fail from the customer’s perspective? From the team’s perspective? From a competitor’s perspective? From a regulator’s perspective? Each lens surfaces different risks.
- Pre-mortem plus pre-parade. After the pre-mortem, you flip it: “It succeeded beyond our expectations. Why?” This surfaces the critical success factors: the things that must go right. You then compare the failure list and the success list to find the high-leverage factors that appear on both.
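The severity-weighted variation above can be sketched in a few lines of code. This is a minimal illustration, not part of Klein’s method: the scenario names and 1–5 scores are hypothetical, and the “risk score” is simply likelihood multiplied by severity.

```python
# Severity-weighted pre-mortem sketch: rank failure scenarios by
# likelihood x severity so the riskiest ones surface first.
# Scenario names and scores below are hypothetical examples.

def prioritize(scenarios):
    """Sort scenarios by likelihood * severity, highest risk first."""
    return sorted(
        scenarios,
        key=lambda s: s["likelihood"] * s["severity"],
        reverse=True,
    )

scenarios = [
    {"name": "key engineer quits", "likelihood": 3, "severity": 5},
    {"name": "competitor launches first", "likelihood": 2, "severity": 4},
    {"name": "minor branding confusion", "likelihood": 4, "severity": 1},
]

for s in prioritize(scenarios):
    print(f'{s["name"]}: risk score {s["likelihood"] * s["severity"]}')
```

The product is deliberately crude. Its job is to force the conversation about which scenarios demand mitigation now, not to produce a precise number.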
How to go about it:
- Step one: Set the frame. The leader, or the thinking partner, says something very specific to the group. Not “what could go wrong” but rather: “Imagine we are eighteen months in the future. We implemented this plan. It was a disaster. It failed completely. Now, each of you, independently, write down every reason you can think of for why it failed.” The past tense matters. “It failed” not “it might fail.” This is what unlocks the honest answers. People stop self-censoring because they’re not predicting failure, but explaining it. It’s psychologically safer to diagnose a hypothetical past failure than to predict a real future one.
- Step two: Independent generation. Everyone writes alone. No discussion or sharing. Usually two to five minutes of silent writing. This prevents groupthink: the loudest person or the most senior person doesn’t shape what everyone else thinks. Each person generates their own failure scenarios from their own perspective and expertise. In Klein’s research, this independence is critical. When people generate risks in open discussion, they tend to converge quickly around the first few ideas mentioned. When they write independently first, the range of failure scenarios is dramatically wider.
- Step three: Round-robin sharing. Each person shares one failure scenario at a time, going around the room. Not all at once, but one per round. This ensures every person’s thinking gets heard, not just the most articulate or most senior. The facilitator captures each one visibly, on a whiteboard, a shared document, whatever the team can all see. No evaluation during this phase. No “that’s unlikely” or “we’ve already thought of that.” Just collection. The goal is to get everything on the surface before any filtering begins.
- Step four: Clustering and deduplication. Once all scenarios are shared, you group similar ones together. “Key engineer quits” and “integration team burns out” might cluster under “people and retention risk.” “Competitor launches similar product” and “cloud provider adds this feature natively” might cluster under “competitive displacement.” This step reveals which categories of risk are generating the most concern across the group. If seven out of ten people independently wrote something about talent retention, that’s a strong signal.
- Step five: Prioritization. This is where judgment enters. Each failure scenario or cluster gets evaluated on two dimensions: how likely is this to actually happen, and how damaging would it be if it did. The high-likelihood, high-damage scenarios are the ones that demand action. The low-likelihood, low-damage ones get noted and monitored but don’t change the plan. Some teams do this with a simple vote: each person gets three dots and places them on the scenarios they think are most critical. Some use the risk-impact matrix from the 2x2 framework. Some discuss and reach consensus. The method matters less than the discipline of choosing. Not everything is equally dangerous, and trying to mitigate twenty risks at once is the same as mitigating none.
- Step six: Mitigation planning. For the top two or three failure scenarios, the team asks: what could we do now, before we execute the plan, to reduce the likelihood of this happening or reduce the damage if it does? These mitigations become part of the plan itself. They’re built into the decision. Sometimes a pre-mortem surfaces a risk so severe and so likely that it changes the decision entirely. That’s rare but it’s the most valuable possible outcome: discovering a fatal flaw before commitment rather than after.
- Step seven: Documentation. The full list of failure scenarios, the prioritization, and the mitigations get documented. Not just the top risks, but all of them. Because some scenarios that seem unlikely today might become relevant as circumstances change. Having the full pre-mortem on record means you can revisit it when new information arrives and ask: does this change which risks are most likely?
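The dot-voting variant of the prioritization step can be tallied mechanically. A minimal sketch, assuming each participant places three dots on the clusters they consider most critical; the cluster names and ballots below are hypothetical.

```python
# Dot-voting tally sketch for pre-mortem prioritization.
# Each inner list is one participant's three dots; all names are
# hypothetical examples, not real data.
from collections import Counter

ballots = [
    ["talent retention", "competitive displacement", "integration delay"],
    ["talent retention", "budget overrun", "competitive displacement"],
    ["talent retention", "integration delay", "budget overrun"],
]

# Count every dot across all ballots.
tally = Counter(dot for ballot in ballots for dot in ballot)

# The top clusters move on to mitigation planning.
for cluster, dots in tally.most_common(3):
    print(f"{cluster}: {dots} dots")
```

The point of the tally isn’t precision; it makes visible which clusters the group independently converged on, which is exactly the signal the prioritization step is looking for.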
Common pitfalls:
- The biggest pitfall is treating it as a brainstorming exercise and stopping there. A list of twenty possible failures is not useful by itself. The discipline is triaging: which three or four failure modes are both likely enough and severe enough to require action now? The rest get logged and monitored but don’t derail the decision.
- Another pitfall is running it too early, before a real decision has been made. The pre-mortem only works when there’s a concrete plan on the table. If everything is still abstract, the failure scenarios are too vague to be useful.
- A third pitfall is using it to kill decisions rather than strengthen them. The pre-mortem is meant to identify the specific things you’d need to mitigate to make the decision work. If someone uses it to argue “see, there are too many risks, we shouldn’t do this,” they’ve missed the point. The question isn’t whether risks exist, but which ones to address.
One more note. The pre-mortem works best when it happens after the team has committed to a plan but before execution begins. There’s a sweet spot: the decision is made, energy and optimism are high, and people feel safe enough to voice concerns because the plan is already approved. If you do it too early, there’s no concrete plan to stress-test. If you do it too late, concerns feel like sabotage.