
Pre-Mortem

TL;DR for executives

The decision is made, everyone’s aligned, execution is about to start. This is the moment to ask: assume it failed. Why? The pre-mortem reliably surfaces risks people are holding privately but not voicing, because explaining a hypothetical past failure is psychologically easier than predicting a real future one. If you run this before your next major initiative, you’ll hear things no one was going to tell you otherwise.

Most frameworks are forward-looking. SCR structures what’s happening and what to do. Issue trees map where to look. The 2x2 matrix helps you choose between options. Hypothesis-driven thinking tests whether your guess is right.

The pre-mortem does something completely different. It assumes you’ve already made the decision. It assumes you moved forward. And then it assumes you failed. The question is: “We did it. It’s twelve months later. It failed catastrophically. Why?”

This framework is specifically designed to map risks proactively, before any failure occurs, by imagining a future catastrophe and working backward. It acts as a “prospective postmortem,” conducted at the project’s start or when a plan is concrete, to surface blind spots, optimism bias, and hidden vulnerabilities that teams overlook in forward planning.

That’s the exercise: Project yourself into a future where the decision was made and everything went wrong. Then work backward to identify why it went wrong. Not what might go wrong, but what DID go wrong, past tense, as if it already happened.

The shift from “might” to “did” is the key move. When you ask people “what could go wrong?” they self-censor. They don’t want to seem negative or paranoid. They offer safe, obvious risks. But when you say “it failed, tell me why,” something unlocks. People give you the real fears. The ones they’ve been holding privately. The ugly scenarios they didn’t want to voice because it would sound disloyal or pessimistic.

Who created it? The American research psychologist Gary Klein. He developed the pre-mortem as part of his work on naturalistic decision making. He noticed that most risk analysis is psychologically biased toward optimism: once a team commits to a plan, its members unconsciously filter out disconfirming information. The pre-mortem breaks that dynamic by making failure the starting assumption rather than an uncomfortable possibility. Klein’s 2007 HBR piece made the pre-mortem widely known.

Why does it matter? When an executive has made a decision and is about to move, she doesn’t need more encouragement. What she needs is someone who can safely surface risks she’s not seeing, without being perceived as negative, disloyal, or obstructive. The pre-mortem gives you a structure for doing exactly that. You’re not saying “I think this will fail.” You’re saying “Let’s assume it failed. What would we wish we’d seen in advance?” That framing changes the emotional dynamic entirely.

How organizations use it. The military uses it in mission planning: before an operation, the team imagines the mission failed and identifies the most likely causes. NASA uses it in engineering reviews. Consulting firms use it before presenting final recommendations to clients: “If the client implements this and it doesn’t work, what did we miss?” Hospitals use it in surgical planning.

The format is simple. The leader says: “Imagine it’s one year from now. We implemented this plan and it was a disaster. Take two minutes and write down every reason you can think of for why it failed.” Everyone writes independently. This prevents groupthink. Then the reasons are shared, discussed, and the most critical ones addressed before moving forward.

Variations: 

  1. Severity-weighted pre-mortem. After generating failure scenarios, you rank them on two dimensions: likelihood of occurring and severity of impact if they do. This connects directly to the risk-impact 2x2 matrix. High-likelihood, high-severity failures get immediate mitigation. Low-likelihood, low-severity ones get noted and monitored.
  2. Time-phased pre-mortem. Instead of one generic failure scenario, you imagine failure at different time horizons. What goes wrong in the first month? The first quarter? The first year? This surfaces different types of risk: early failures tend to be execution problems, later failures tend to be strategic misreads.
  3. Stakeholder pre-mortem. You run the exercise from the perspective of different stakeholders. How does this fail from the customer’s perspective? From the team’s perspective? From a competitor’s perspective? From a regulator’s perspective? Each lens surfaces different risks.
  4. Pre-mortem plus pre-parade. After the pre-mortem, you flip it: “It succeeded beyond our expectations. Why?” This surfaces the critical success factors: the things that must go right. You then compare the failure list and the success list to find the high-leverage factors that appear on both.

How to go about it:

  • Step one: Set the frame. The leader or the thinking partner says something very specific to the group. Not “what could go wrong” but rather: “Imagine we are eighteen months in the future. We implemented this plan. It was a disaster. It failed completely. Now, each of you, independently, write down every reason you can think of for why it failed.” The past tense matters. “It failed” not “it might fail.” This is what unlocks the honest answers. People stop self-censoring because they’re not predicting failure, but explaining it. It’s psychologically safer to diagnose a hypothetical past failure than to predict a real future one.
  • Step two: Independent generation. Everyone writes alone. No discussion, no sharing. Usually two to five minutes of silent writing. This prevents groupthink: the loudest person or the most senior person doesn’t shape what everyone else thinks. Each person generates their own failure scenarios from their own perspective and expertise. In Klein’s research, this independence is critical. When people generate risks in open discussion, they tend to converge quickly around the first few ideas mentioned. When they write independently first, the range of failure scenarios is dramatically wider.
  • Step three: Round-robin sharing. Each person shares one failure scenario at a time, going around the room. Not all at once, but one per round. This ensures every person’s thinking gets heard, not just the most articulate or most senior. The facilitator captures each one visibly, on a whiteboard, a shared document, whatever the team can all see. No evaluation during this phase. No “that’s unlikely” or “we’ve already thought of that.” Just collection. The goal is to get everything on the surface before any filtering begins.
  • Step four: Clustering and deduplication. Once all scenarios are shared, you group similar ones together. “Key engineer quits” and “integration team burns out” might cluster under “people and retention risk.” “Competitor launches similar product” and “cloud provider adds this feature natively” might cluster under “competitive displacement.” This step reveals which categories of risk are generating the most concern across the group. If seven out of ten people independently wrote something about talent retention, that’s a strong signal.
  • Step five: Prioritization. This is where judgment enters. Each failure scenario or cluster gets evaluated on two dimensions: how likely is this to actually happen, and how damaging would it be if it did. The high-likelihood, high-damage scenarios are the ones that demand action. The low-likelihood, low-damage ones get noted and monitored but don’t change the plan. Some teams do this with a simple vote: each person gets three dots and places them on the scenarios they think are most critical. Some use the risk-impact matrix from the 2x2 framework. Some discuss and reach consensus. The method matters less than the discipline of choosing. Not everything is equally dangerous, and trying to mitigate twenty risks at once is the same as mitigating none.
  • Step six: Mitigation planning. For the top two or three failure scenarios, the team asks: what could we do now, before we execute the plan, to reduce the likelihood of this happening or reduce the damage if it does? These mitigations become part of the plan itself. They’re built into the decision. Sometimes a pre-mortem surfaces a risk so severe and so likely that it changes the decision entirely. That’s rare, but it’s the most valuable possible outcome: discovering a fatal flaw before commitment rather than after.
  • Step seven: Documentation. The full list of failure scenarios, the prioritization, and the mitigations get documented. Not just the top risks, but all of them. Because some scenarios that seem unlikely today might become relevant as circumstances change. Having the full pre-mortem on record means you can revisit it when new information arrives and ask: does this change which risks are most likely?
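
The prioritization step can be made concrete with a simple likelihood × severity score, as in the severity-weighted variation. A minimal sketch, in Python; the scenario names are taken from the exercise later in this chapter, and the 1-to-5 scores are illustrative assumptions, not assessments from the source:

```python
# Each failure scenario gets a likelihood and a severity score from
# 1 (low) to 5 (high); the product ranks them, highest risk first.
# Scores here are made up for illustration.
scenarios = {
    "token costs destroy margins at scale": {"likelihood": 4, "severity": 5},
    "core engineering team leaves":         {"likelihood": 3, "severity": 4},
    "big platforms eat our market":         {"likelihood": 2, "severity": 5},
    "EU AI legislation shifts":             {"likelihood": 2, "severity": 3},
}

def rank(scenarios):
    """Sort scenarios by likelihood x severity, most dangerous first."""
    return sorted(
        scenarios.items(),
        key=lambda item: item[1]["likelihood"] * item[1]["severity"],
        reverse=True,
    )

for name, s in rank(scenarios):
    print(f'{s["likelihood"] * s["severity"]:>2}  {name}')
```

The point of the sketch is the discipline, not the arithmetic: only the top two or three entries of the ranked list go into mitigation planning; the rest are logged and monitored.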

Common pitfalls:

  1. The biggest pitfall is treating it as a brainstorming exercise and stopping there. A list of twenty possible failures is not useful by itself. The discipline is triage: which three or four failure modes are both likely enough and severe enough to require action now? The rest get logged and monitored but don’t derail the decision.
  2. Another pitfall is running it too early, before a real decision has been made. The pre-mortem only works when there’s a concrete plan on the table. If everything is still abstract, the failure scenarios are too vague to be useful.
  3. A third pitfall is using it to kill decisions rather than strengthen them. The pre-mortem is meant to identify the specific things you’d need to mitigate to make the decision work. If someone uses it to argue “see, there are too many risks, we shouldn’t do this,” they’ve missed the point. The question isn’t whether risks exist, but which ones to address.

One more note. The pre-mortem works best when it happens after the team has committed to a plan but before execution begins. There’s a sweet spot: the decision is made, energy and optimism are high, and people feel safe enough to voice concerns because the plan is already approved. If you do it too early, there’s no concrete plan to stress-test. If you do it too late, concerns feel like sabotage.

Exercise

Advise the CEO of a mid-size European cybersecurity firm (150 people, Amsterdam, selling endpoint protection to mid-market companies). She wants to expand into AI-powered threat detection. To make this happen, the CEO wants to acquire an AI security startup. She selected her acquisition target (proprietary models, good token economics, solid detection quality). The deal is moving forward. She says: “We’re signing in three weeks. Before I commit, I want to know what could kill us. Not the obvious stuff. The strategic risks.”

Answer

  • Failure scenarios:
    • A major security crisis involving AI. A worldwide or local security incident involving AI causes enterprises to cut back on AI tools, especially in security. The category itself becomes toxic.
    • EU AI legislation shifts. Regulatory changes or signals of changes make the AI layer built on proprietary models more expensive to operate. Compliance and monitoring costs escalate beyond projections.
    • The product looks good on paper but is full of bugs. The proprietary models and token costs look solid during due diligence, but integration reveals deep technical issues. Repairing them costs as much as building from scratch, destroying the acquisition’s value proposition.
    • The core engineering team leaves and builds a competing tool. The nucleus of the startup’s AI talent exits after the acquisition and uses their knowledge to compete against us directly. We lose the capability we paid for and gain a competitor who knows our product intimately.
    • Big platforms eat our market. Large proprietary model providers start offering endpoint security as part of their services, competing at a scale and price point we can’t match. Our niche disappears.
    • Token costs destroy margins in practice. The model is more expensive when deployed at scale than projections showed. We burn through capital and have nothing left to fund the marketing and sales side of the GTM motion.
    • GTM model mismatch. We apply traditional B2B SaaS GTM motions to an AI-layer product when AI adoption is shifting to “test the product right now” models. We fail to adapt our distribution to how enterprises actually adopt AI tools.
    • Enterprise trust lag. The enterprises we serve don’t trust AI for security yet. We have to spend significantly more money and time than planned on education and trust-building before they’ll adopt.
    • Internal communication breakdown. We fail to document and educate our own teams on the integration, how the new product works, and how to talk about it across different touchpoints. The organization doesn’t know how to sell or support what it just bought.
  • Prioritization:
    • Risk one: token costs destroy margins at scale. My reasoning: we might lose control over how the proprietary models behave once integrated into our system. There are always unknown factors when integrating two fundamentally different systems (AI vs. legacy). Losing control over token economy under real customer usage can tank the business overnight. A token economics failure means the business model itself doesn't work. You can fix marketing. You can’t fix “every customer interaction costs us more than we earn from it.”
    • Risk two: GTM model mismatch combined with enterprise trust lag. My reasoning: distribution is everything. Everyone is adding AI layers or building AI agents. B2B buyers are becoming more sophisticated. Selling online is becoming harder every day. If we don’t think this through, we might fail at finding customers even with a great product. If enterprises don’t trust AI security yet AND we’re selling it using traditional motions that don’t match how AI products get adopted, we’re pushing an unfamiliar product through the wrong channel to a skeptical buyer.
    • Why these two over the others: I chose GTM over internal communication as a matter of perceived importance for the CEO. A CEO will put much more emphasis on GTM than internal comms. I would rather lead with an area where she will act than offer something where there’s already massive inertia. The internal communication risk is real, but it’s held in reserve, not discarded.
  • Mitigation: 
    • For risk one: Build it into the deal terms. Run a 90-day integration pilot before full commitment. If token costs exceed a defined threshold during the pilot, we renegotiate or exit. This doesn’t eliminate the uncertainty. It protects our exposure to the uncertainty. We’re buying the option to walk away if reality doesn’t match projections.
    • For risk two: Before signing, sit down with the startup’s marketing team and co-founder. Understand how they approached GTM, what worked, what didn’t, who their competitors are, what they’re doing, what success looks like on the market. This conversation tells us about the people we’re acquiring (how they think, how honestly they assess their own performance, how strategic they are about distribution) at the same time as it tells us about the GTM landscape.