
Resulting: Why a Perfect Process Can Lead to a Bad Outcome

How to judge decisions in a world of randomness

April 27, 2026 · 6 min read

Why good decisions sometimes lose

Good decisions sometimes backfire and bad ones sometimes win. That mismatch—judging a decision by its outcome—is called resulting. In complex, noisy environments, even a near-perfect process can deliver a painful result through no fault of the decision-maker. This article explains why: fat tails, nonlinear feedback, path dependence, and the sticky narratives we build after the fact. More importantly, it shows how to separate process quality from outcome luck, design systems that survive bad rolls, and stay steady when progress arrives in lumpy bursts rather than straight lines.

Resulting, defined: The outcome is not the process

Resulting is the habit of grading a decision by how it turned out rather than by how it was made. It confuses luck with skill and erodes learning: we punish good processes that encountered bad variance and reward reckless gambles that happened to land heads.

To break the habit, freeze time at the moment of choice: What facts were knowable? Which uncertainties remained? What alternatives were considered? A sound process identifies base rates, considers plausible downside scenarios, and sets guardrails—before the coin is tossed.

When the world isn’t Gaussian

Many decision tools assume a tidy world of independent, modest shocks—the bell curve. In that world, extremes are vanishingly rare and outcomes cluster neatly around an average. But real life often breaks the two key assumptions: independence and small step sizes. When either fails, volatility scales, tails thicken, and “unthinkable” events happen surprisingly often.

Add a dash of nonlinear dynamics and things get wilder. In systems where tiny nudges can explode into outsized consequences, precise predictions degrade fast. Even a nearly flawless process can’t immunize you against a fat-tailed shock or a chaotic chain reaction.
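A quick simulation makes the gap concrete. The sketch below compares how often 4-sigma events occur under a normal distribution versus a fat-tailed Student-t with 3 degrees of freedom (both built from standard normals with only the standard library; the distributions and thresholds are illustrative choices, not from the article):

```python
import random
import math

random.seed(0)

def student_t(df):
    # Classic construction: t = Z / sqrt(chi2_df / df),
    # where chi2_df is a sum of df squared standard normals.
    z = random.gauss(0, 1)
    chi2 = sum(random.gauss(0, 1) ** 2 for _ in range(df))
    return z / math.sqrt(chi2 / df)

N = 100_000
normal_extremes = sum(abs(random.gauss(0, 1)) > 4 for _ in range(N))
fat_extremes = sum(abs(student_t(3)) > 4 for _ in range(N))

print(f"Normal |x| > 4: {normal_extremes} of {N}")
print(f"Fat-tailed |x| > 4: {fat_extremes} of {N}")
```

Under the bell curve, a 4-sigma move is a handful-per-hundred-thousand rarity; under even a mildly fat-tailed distribution it shows up hundreds of times more often. "Unthinkable" is a property of the model, not the world.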

Positive feedback, path dependence, and the unfairness of success

In many arenas, success begets success. When past wins increase the odds of future wins, small initial edges can snowball into dominance. That’s why equally skilled competitors can end up with wildly different outcomes: early luck hooks into feedback loops that widen the gap.

History also sticks. Once certain habits, standards, or technologies gain traction, they lock in—not necessarily because they’re best, but because switching costs and network effects make them self-reinforcing. In such landscapes, outcome inequality is often more about path than inherent merit, and judging process quality from final standings is particularly misleading.
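The snowball effect can be seen in a Pólya-urn toy model (an illustrative sketch, not anything the article specifies): two perfectly equal competitors start at 1-1, and each win adds to the winner's future odds. Identical skill, identical start, wildly different endings:

```python
import random

random.seed(1)

def final_share(steps=1000):
    # Positive feedback: each win adds a "ball", raising the
    # winner's probability of winning the next round.
    a, b = 1, 1
    for _ in range(steps):
        if random.random() < a / (a + b):
            a += 1
        else:
            b += 1
    return a / (a + b)

# Many independent histories from the same starting conditions.
finals = [final_share() for _ in range(500)]
print(f"Competitor A's final share ranged from "
      f"{min(finals):.2f} to {max(finals):.2f}")
```

In the limit, the final share is uniformly distributed: any split between total dominance and total irrelevance is equally likely, driven entirely by early luck locked in by feedback.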

Why our stories fool us

We crave explanations, so we compress messy reality into neat narratives. That shrink-wrap feels satisfying but can be logically wrong: an embellished story can seem more plausible than the simpler statement it contains, even though adding detail can only lower its probability. After outcomes arrive, we retrofit causes, convert ambiguity into certainty, and inflate our confidence.

More data doesn’t save us—often it hurts. Extra drips of information raise our confidence without improving accuracy, and modern tools can mine any dataset into a perfect-looking mirage. Regressions become Rorschach tests. The more elaborate the story, the harder it is to unlearn when the world refuses to cooperate.

Reflection

After a loss, what tidy story did you construct—and which facts did you ignore?

Scale matters: what looks like failure up close

Zoom in tightly on any high-variance process and you’ll mostly see noise, not signal. At short intervals, even a strategy with a strong edge will look like a coin flip. If you monitor outcomes too frequently, the pain from losses will outweigh the muted pleasure from gains, draining judgment and willpower.

Randomness also hides in plain sight. Perfectly random sequences still produce streaks and clusters, which we then misread as meaningful. If your process is sound, widen the lens and lengthen the evaluation window before you declare defeat.
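Streaks in fair coin flips are longer than intuition expects. The sketch below (parameters are illustrative) measures the longest run of identical outcomes in sequences of 200 fair flips:

```python
import random

random.seed(42)

def longest_streak(flips):
    # Length of the longest run of identical consecutive outcomes.
    best = run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

n_flips, n_trials = 200, 1000
streaks = [
    longest_streak([random.random() < 0.5 for _ in range(n_flips)])
    for _ in range(n_trials)
]
avg = sum(streaks) / len(streaks)
print(f"Average longest streak in {n_flips} fair flips: {avg:.1f}")
```

A typical 200-flip sequence contains a streak of seven or eight in a row purely by chance. Seen up close, that streak begs for a causal story; seen from the right distance, it's exactly what randomness looks like.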

The trader who tore up his performance review

A young trader built a portfolio that made money slowly and exploded only during rare market shocks. Year after year, his paper results looked unimpressive next to peers who chased steady but fragile gains. He knew his edge came from surviving long dry spells and harvesting occasional outsized paydays. So he structured his life to resist resulting: he picked clients who would judge him over very long horizons, and he kept his demeanor calm and self-assured when short-term numbers lagged.

When a supervisor handed him a standard evaluation—designed to reward smooth, short-term returns and penalize volatility—he quietly tore it up. For him, the grade made no sense because it measured the wrong thing. He had built a process explicitly to look mediocre most days and extraordinary on a few. His yardstick was survival, convexity, and adherence to risk limits, not month-to-month applause.

Years later, a single “tail” event validated the design, erasing a decade of underappreciation in a week. He hadn’t become smarter overnight; the world had finally delivered the scenario his process was built for. The lesson isn’t to wait for a windfall—it’s to choose the yardstick first, match it to your strategy’s real risk/return shape, and defend it against the gravitational pull of resulting.

When rules help—and when they backfire

Rules reduce decision costs and curb discretion, which is useful when trust is low or stakes are high. But rigid rules are rarely perfect. If they’re too harsh for real-world nuance, people will quietly route around them, reintroducing noise in the shadows. Standards—principles that guide judgment—demand more effort but can fit complex cases better when decision-makers are competent and careful.

Worse, the wrong rule can reset social expectations entirely. Introduce a market-style penalty into a domain governed by norms, and you may permanently displace guilt with price. Once that shift happens, even removing the rule won’t restore the original behavior. A process that looks tidy on paper can degrade the very system it aimed to improve.

Progress is lumpy—design your psyche accordingly

Human brains expect linear effort to produce linear progress. Many valuable pursuits don’t work that way. Learning, research, creative work, and entrepreneurship often feature long plateaus punctuated by sudden leaps. If you judge yourself by daily visible output, you will feel demoralized right before the breakthrough.

Protect motivation by committing to process goals you can control—focused hours, reps, experiments—while accepting that visible results will arrive in bursts. Nonlinear domains reward stamina and variance-tolerant designs more than they reward day-to-day smoothness.

Practical safeguards against resulting

Build decision logs that capture your information set, base rates, options, and risk limits before outcomes arrive. Define acceptable ranges, not single-point forecasts—averages can be lethal if tails matter. Use pre-set review intervals to avoid reacting to noise. Beware overfitting: if a beautiful story emerged only after the data, treat it as suspect.
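One minimal way to make such a log concrete is a simple record captured before the outcome is known. The field names below are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DecisionLogEntry:
    # Everything except `outcome` is recorded BEFORE results arrive.
    decision: str
    known_facts: list           # the information set at the moment of choice
    base_rate: float            # ex-ante probability of success
    alternatives: list          # options considered and rejected
    downside_range: tuple       # acceptable (worst, best) range, not a point forecast
    review_date: date           # pre-set interval: no peeking earlier
    outcome: str = ""           # filled in only at the scheduled review

entry = DecisionLogEntry(
    decision="Launch the pilot in Q3",
    known_facts=["2 of 5 comparable pilots converted"],
    base_rate=0.4,
    alternatives=["delay to Q4", "skip the pilot"],
    downside_range=(-50_000, 200_000),
    review_date=date(2026, 10, 1),
)
print(entry)
```

The point of the structure is discipline, not software: by forcing base rates, alternatives, and ranges onto the page before the coin is tossed, the log gives you something other than the outcome to grade later.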

When evaluating others, match metrics to the true shape of the strategy: some processes should look dull most days and brilliant on a few; others should be steady but modest. Reward adherence to guardrails, not just recent performance.

Action

Before judging or changing course, write a one-paragraph rationale with base rates, alternatives, and explicit error bars; revisit it only at pre-set intervals.

Key Takeaways

  • A good process can yield a bad outcome in fat-tailed, nonlinear, path-dependent systems.
  • Judge decisions by their ex-ante quality; separate process from luck to learn accurately.
  • Narratives and extra data inflate confidence without improving accuracy; beware post-hoc stories.
  • Zoom out: frequent evaluation magnifies noise and emotional pain, distorting judgment.
  • Choose rules when discretion risks are high; use standards when nuance and trust permit.
  • Expect lumpy progress; anchor motivation to controllable process goals.
  • Use decision logs, base rates, error bars, and scheduled reviews to resist resulting.