
Scenario Planning

TL;DR for executives

The decision has a three-to-five-year horizon, and the forces that will determine whether it works (regulation, technology adoption, competitive moves) are things no one can predict. Scenario planning stops trying to forecast which future will arrive and instead prepares you for several simultaneously. We're not talking about predictions. Scenario planning results in either a strategy that works regardless of which future shows up, or a clear set of signals that tell you which one is emerging so you can adjust before it's too late.

Most frameworks deal with one version of reality. SCR compresses the current situation. Issue trees map a single problem space. The 2x2 matrix evaluates options against fixed criteria. Hypothesis-driven thinking tests one claim. Pre-mortem imagines one failure. Second-order effects traces chains from one decision.

Scenario planning is different. It asks: What if the world changes? Not “What should we do given how things are” but “What should we do given that we don’t know how things will be?” It forces you to think across multiple possible futures simultaneously and make decisions that are robust enough to work in more than one of them.

The structure. You identify the two biggest uncertainties facing a decision: the forces that will shape the future but that you cannot predict or control. You map them as axes (like a 2x2). The four quadrants become four distinct future worlds. Then you ask: What would we do in each world? And critically: Is there a move that works across all four worlds?
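The mechanics can be sketched as a tiny data model: two uncertainty axes crossed into four worlds, plus a robustness check for a candidate move. This is an illustrative sketch, not part of the framework itself; all names and thresholds here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class World:
    name: str        # evocative name, e.g. "The Fortress"
    axis_x: str      # which end of the first uncertainty this world sits on
    axis_y: str      # which end of the second uncertainty
    narrative: str   # short, internally consistent description

def build_worlds(axis_x_ends, axis_y_ends, names, narratives):
    """Cross two uncertainty axes into four distinct future worlds."""
    worlds = []
    i = 0
    for x in axis_x_ends:
        for y in axis_y_ends:
            worlds.append(World(names[i], x, y, narratives[i]))
            i += 1
    return worlds

def robustness(move, worlds, works_in):
    """Classify a move: robust if it works in every world, a gamble if in
    some, fragile if in one or none. `works_in` maps world name -> bool."""
    wins = sum(works_in[w.name] for w in worlds)
    if wins == len(worlds):
        return "robust"
    return "gamble" if wins >= 2 else "fragile"
```

Asking "is there a move that works across all four worlds?" then reduces to filling in the `works_in` table honestly, one world at a time.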

Who developed it: 

  • Scenario planning emerged in the military in the 1950s through the work of Herman Kahn at the RAND Corporation. It was later expanded into the business world at Royal Dutch Shell in the early 1970s by Pierre Wack.
  • Shell wanted to plan for oil price fluctuation but realized that traditional forecasting, predicting one future, was useless because the oil market was driven by geopolitical forces nobody could predict.
  • Wack’s insight was: stop trying to predict which future will happen. Instead, develop detailed narratives of several plausible futures and prepare for all of them. Shell built scenarios around the possibility of an oil embargo by OPEC nations. When the 1973 oil crisis actually happened, Shell was the only major oil company that had prepared for it. They went from one of the weaker players to one of the strongest in a few years, not because they predicted the crisis, but because they had already thought through what they’d do if it happened.
  • The method was later formalized by Peter Schwartz (who worked at Shell) in his book The Art of the Long View and it’s now used by governments, militaries, corporations, and NGOs worldwide.

Why it matters: 

  • An executive dealing with a major strategic decision is always facing uncertainty. Will regulation tighten or loosen? Will the market grow or contract? Will AI adoption accelerate or face backlash? She can't know. And most advisors either pretend to know, making confident predictions that are often wrong, or throw up their hands, presenting so many variables that the executive is more confused than before.
  • Scenario planning gives you a third option: structured uncertainty. You say to her: “We can’t predict which future we’ll be in. But there are really only two or three forces that will determine it. Let’s map the possible worlds and find the moves that make sense regardless of which one arrives.”
  • That’s a profoundly different value proposition. You’re not claiming to see the future. You’re helping her prepare for multiple futures. That builds trust because it’s honest, and it produces better decisions because the strategy is robust rather than fragile.

How it connects with other frameworks: 

  • The 2x2 matrix is the backbone: two axes, four quadrants. But instead of mapping options against criteria, you’re mapping uncertainties against each other to create future worlds.
  • Second-order effects live inside each scenario: once you’ve described a future world, you can trace the chains of consequences within it.
  • Pre-mortem stress-tests each scenario: for any strategy you're considering, you can run a pre-mortem within each world.
  • Hypothesis-driven thinking helps you monitor which scenario is actually emerging: you form indicators for each world and watch for them in real time.
  • SCR is how you communicate the output: here’s the situation (uncertainty), the complication (multiple possible futures), the resolution (a robust strategy that works across them).
  • This framework pulls everything together.

The key disciplines:

  • Choosing the right uncertainties for your axes is everything, just like the 2x2. But here the criteria are different. You’re not looking for what matters most. You’re looking for what’s most uncertain AND most impactful. A force that’s highly uncertain but doesn’t change the decision is irrelevant. A force that’s highly impactful but predictable isn’t an uncertainty: it’s a planning assumption. You need the forces that are both unpredictable and decision-altering.
  • Each scenario must be a plausible, internally consistent narrative, not just a label. For example, “high regulation” isn’t a scenario. “The EU expands the AI Act to cover all educational technology, requiring full algorithmic transparency and human oversight for any AI system that evaluates student performance, while simultaneously three major AI-education scandals erode public trust,” that’s a scenario. It’s a world you can inhabit mentally and make decisions within.
  • The scenarios are not predictions. You’re not saying these will happen. Instead, you admit that these scenarios could happen, and each one would require different strategic responses. The discipline is treating all four with equal seriousness, even the ones you think are unlikely.

Common pitfalls:

  1. Choosing uncertainties that aren’t truly uncertain. If everyone agrees AI adoption will increase, that’s not an uncertainty but a trend. Put it in all four scenarios as a constant and find the real uncertainties underneath it.
  2. Making one scenario the “good” one and the others “bad.” All four should contain both opportunities and threats. A world with heavy regulation might be bad for speed but good for companies that invested in compliance early. A world with no regulation might be good for speed but create a race to the bottom that erodes trust.
  3. Too many scenarios. Four is the standard for a reason. Fewer than four doesn't capture enough variation. More than four becomes unmanageable: the human mind can’t hold six or eight distinct worlds simultaneously. Two axes, four worlds. That’s the constraint.
  4. Scenarios that are too similar. If two of your quadrants produce basically the same strategic response, your axes aren’t creating enough differentiation. Choose different uncertainties.

How to go about it:

  • Step 1: Define the decision and the time horizon. Before anything else, you need clarity on what decision is being made and over what timeframe. Scenario planning for a one-year decision is very different from scenario planning for a ten-year decision. Shorter horizons have fewer uncertainties. Longer horizons have more room for the world to shift dramatically.
  • Step 2: List the driving forces. Before choosing your two uncertainty axes, you need to map the landscape of forces that will shape the decision’s outcome. These come in two categories:
    • Predetermined elements: Forces that are already in motion and unlikely to change regardless of what happens. Aging populations in Europe. Increasing healthcare costs. Digital health records becoming standard. The general trajectory of AI capability improving. These go into every scenario as constants.
    • Critical uncertainties: Forces that could go in meaningfully different directions and that would significantly change the strategic landscape. These are your candidates for the axes. Regulations could tighten or loosen. AI trust in healthcare could increase or collapse. Reimbursement models could shift toward or away from AI diagnostics. Hospital procurement could centralize or fragment.
    • The discipline is separating these two cleanly. If you put a predetermined element on your axis, your scenarios will feel artificial because everyone knows that force is going in one direction. If you miss a critical uncertainty, your scenarios will have a blind spot.
  • Step 3: Choose two uncertainties for your axes. From your list of critical uncertainties, pick the two that are most uncertain AND most impactful on the specific decision. Not the most interesting or dramatic. But the two where: you genuinely don’t know which direction they’ll go, AND the direction they go fundamentally changes what the CEO should do. Test your axes by asking: If I know the answer to these questions, do I know enough to make the strategic decision? If yes, your axes are right. If you’d still be lost, you're missing a more important uncertainty.
  • Step 4: Build the four worlds.
    • Each quadrant of your 2x2 is a future world. Give each one a name: not a label like “high regulation, high trust” but an evocative name that captures the feel of that world. “The Fortress” or “The Wild West” or “The Slow Burn.” Names make scenarios memorable and discussable.
    • For each world, write a short narrative. Two to four sentences. Describe what it feels like to operate in that world. What’s happened? What’s the dominant dynamic? Who’s winning and who’s losing? Make it vivid enough that the CEO can mentally step into it and ask “What would I do here?”
    • The test for a good scenario: it’s plausible, it’s internally consistent (the pieces fit together), it’s meaningfully different from the other three, and it contains both opportunities and threats. No scenario should be pure paradise or pure disaster.
  • Step 5: Test strategies against all four worlds. This is where the framework earns its value. Take the executive’s proposed strategy and ask: how does this play out in each world? In World A, does the strategy succeed? Why or why not? In World B? World C? World D? If the strategy works in all four, it’s robust. If it works in two and fails in two, it’s a gamble. If it only works in one, it’s fragile.
  • Step 6: Find the robust moves. A robust move is an action that makes sense regardless of which scenario unfolds. It might not be the optimal move in any single scenario, but it’s a good move in all of them. These are the safest strategic bets. Sometimes, no single robust move exists. The worlds are too divergent. In that case, the output is different: identify which scenario is most likely, make a primary bet, and build trigger points: specific observable signals that tell you which world is actually emerging so you can pivot early.
  • Step 7: Identify signposts. For each scenario, name two or three early indicators that would tell you this particular world is emerging. These become a monitoring system. You don’t wait five years to find out which future arrived. You watch for signals now. For example: “If the EU announces mandatory algorithmic transparency requirements for medical AI within the next eighteen months, we’re moving toward World A.” That’s a signpost. It tells you which future is becoming real while there’s still time to adjust.
  • Step 8: Communicate the output. The executive doesn’t need to see the full analysis. She needs three things: the four worlds named and briefly described so she has a shared vocabulary for discussing uncertainty, the robust moves she should make now regardless of which future arrives, and the signposts she should watch to know which world is emerging. That’s a one-page brief. Four named worlds. One or two robust moves. Three or four signposts. That’s the deliverable.

The overall sequence: Define the decision and time horizon. List driving forces, separating predetermined elements from critical uncertainties. Choose two uncertainty axes. Build four named, narrative worlds. Test the proposed strategy against all four. Find robust moves or identify primary bets with trigger points. Set signposts for monitoring. Communicate as a one-page brief.
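The signpost-monitoring step lends itself to a simple sketch: each observable signal points toward one world, and you tally what you've actually seen. The signal strings and world names below are hypothetical stand-ins, assuming for simplicity that each signpost maps to exactly one world.

```python
from collections import Counter

# Hypothetical signposts: observable signal -> the world it points toward.
SIGNPOSTS = {
    "EU announces mandatory algorithmic transparency for medical AI": "World A",
    "Reimbursement codes approved for AI diagnostics": "World B",
    "Major AI diagnostics scandal makes headlines": "World C",
    "Hospital procurement consolidates into national tenders": "World D",
}

def emerging_world(observed_signals):
    """Tally which world the observed signals point toward. Returns the
    leading (world, signal_count) pair, or None if nothing has fired yet."""
    votes = Counter(SIGNPOSTS[s] for s in observed_signals if s in SIGNPOSTS)
    if not votes:
        return None
    return votes.most_common(1)[0]
```

In practice this is a watchlist reviewed quarterly, not a program; the point is that each signpost is specific and observable enough that you could check it off, which is exactly the discipline Step 7 asks for.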

Exercise

A European mid-size pharmaceutical company (400 employees, Munich) specializes in diagnostic tests for rare diseases. They sell physical diagnostic kits to hospitals and research labs. The CEO is considering moving from physical kits to AI-powered diagnostic software. This is a five-year decision. Identify two uncertainties, build four worlds, find a robust move.

Answer

  • Predetermined elements (go into all four scenarios as constants):
    • EU legislation on AI + data privacy for healthcare will keep evolving toward higher data privacy and transparency demands.
    • Increasing investment in AI-driven diagnostics and healthcare solutions.
    • Increasing digitization of healthcare, with hospitals upgrading infrastructure.
    • People will continue needing healthcare, especially with EU population aging.
    • Healthcare insurance companies will also evolve and adopt AI-related approaches.
    • More AI software for scanning, integration platforms, data reading, and cross-examination.
  • Critical uncertainties:
    • AI diagnostic accuracy and reliability are unknown. We don’t have extensive research on AI in healthcare environments. We might face accuracy issues we can’t envision today. Professionals’ and patients’ trust depends entirely on this.
    • The security and regulatory environment could go either way: manageable frameworks that encourage growth, or suffocating regulation that makes the business unviable.
  • Important insight (before the analysis):
    • The data that feeds AI diagnostics still has to come from somewhere. The AI software is an interpretational layer. We still need the physical infrastructure (the kits) to extract biological information from patients. This reframes the CEO’s decision from substitution (kits OR AI) to layering (kits AND AI).
  • The two axes:
    • X axis: AI diagnostic accuracy and trust (ranging from “proven highly accurate and trusted” to “unreliable, distrusted”)
    • Y axis: Security and regulatory environment (ranging from “manageable compliance, stable regulation” to “heavy regulation, escalating compliance costs”)
    • Why these axes: They are truly uncertain (nobody can predict either), independent of each other (you could have high accuracy with suffocating regulation, or low accuracy with permissive regulation), and they determine the strategic decision.
  • The four worlds:
    • Healthy Lungs, Easy Breathing (high accuracy + manageable regulation): People love AI diagnostics. Hospitals love it. Early detection statistics are improving across the EU. Margins are strong because regulation is well-adjusted to both protect everyone and encourage businesses to grow. But since entry barriers are lowering, competitors are flooding in. AI diagnostics is gradually becoming a commodity. We have to compete on pricing instead of innovation and differentiation.
    • Healthy Lungs, Obstructed Breathing (high accuracy + heavy regulation): Everyone wants AI diagnostics. The tools proved high reliability and accuracy, saving lives. Media is positive. However, the security and regulatory environment is almost suffocating. Margins are extremely low because most revenue goes into transparency, monitoring, and reporting.
    • One Collapsed Lung (low accuracy + manageable regulation): Accuracy issues are emerging. Cases of wrong AI diagnostics are increasing. Public distrust toward AI is growing. Hospitals avoid it to protect their reputation. And because security and regulatory frameworks aren’t tight, imposters and low-quality competitors flood the market, increasing failed diagnostics and bringing more bad reputation to the entire industry. The permissive environment doesn’t protect the industry. It accelerates its destruction.
    • Two Collapsed Lungs (low accuracy + heavy regulation): AI diagnostics has a bad reputation. Low accuracy led to fatalities that made headlines. Investigations reveal systematic AI-driven diagnostic failures across hospitals and countries. Media becomes sensationalist, packaging even unrelated cases as AI mistakes. Doctors and hospitals avoid it. And at the same time, security and regulatory frameworks are suffocating AI businesses with tight controls that get worse with each negative headline.
  • The move:
    • Keep the physical diagnostic kits business as the revenue and data foundation. Build AI diagnostics as an interpretational layer on top, not a replacement.
  • How it works in each world:
    • Easy Breathing: AI commoditizes. Competitors flood in. But we still have the kits business generating revenue and, critically, generating proprietary data that feeds our AI layer. Competitors who built AI-only don’t have their own data source. We do. The kits become our moat.
    • Obstructed Breathing: Regulation is suffocating. AI margins are thin. But kits revenue sustains the company while we absorb compliance costs on the AI side. We’re not betting the entire company on a product regulation might strangle.
    • One Collapsed Lung: AI trust is collapsing. Bad actors are everywhere. But we’re the company that kept its physical kits, that still has the trusted analog product, AND can offer AI as an optional enhancement rather than a replacement. When hospitals retreat from AI-only providers, they come to us because we offer both. Our credibility survives because we never abandoned the proven method.
    • Two Collapsed Lungs: Everything is against AI diagnostics. But our company still has a functioning business selling physical diagnostic kits for rare diseases. The AI investment is a loss, but it doesn’t kill the company because we didn’t bet everything on it. We survive to try again when conditions improve.