The Mental Model Trap

Why Reading 100 Frameworks Hasn't Made Us Better Decision-Makers

By Harish Keswani · Founder, DecisionsMatter.ai

A founder I have advised for fifteen years called me on a Sunday evening. He had a written offer to sell his company on his desk. The deadline was Tuesday. He was fifty-three, financially comfortable, and had built the business from a single room over two decades. On the call, his voice had a quality I had heard many times before from clients in that particular moment — a tightness that was not fear exactly, and not excitement, but a kind of mental constriction. The shape of a mind that cannot access its own resources.

This is a person who has read Thinking, Fast and Slow. He follows Farnam Street. He listens to The Knowledge Project on his morning walks. In calm dinner-table conversation he can fluently discuss twelve mental models, reason from first principles, and recite Charlie Munger's latticework of disciplines. And on this Sunday evening, with the stakes entirely real and the clock genuinely ticking, none of it was available to him.

He was not unintelligent. He was not unprepared. He had done the reading. And yet, at the precise moment when everything he had read should have been most useful, it was nowhere to be found.

The library problem

The mental models community has done extraordinary work over the last two decades. Shane Parrish's three-volume Great Mental Models series is a genuine contribution to how serious thinkers approach consequential choices. Naval Ravikant's writing has brought frameworks like regret minimisation and leverage thinking into the mainstream conversation. Charlie Munger's original insistence on a multi-disciplinary latticework of models — from physics, biology, economics, psychology — laid the intellectual foundation the rest of us have built on. Annie Duke has translated decision theory from poker rooms into boardrooms. Daniel Kahneman's work on System 1 and System 2 reshaped how an entire generation thinks about thinking.

The library is richer than it has ever been. Any motivated reader today has access to frameworks and ideas that, even thirty years ago, were available only to professional philosophers, elite investors, and a small circle of interdisciplinary academics.

And yet something is missing.

I have spent twenty-five years advising founders, executives, and mid-career professionals on high-stakes decisions. I have watched, over and over again, the same pattern: smart, well-read, reflective people making decisions that their own reading would have told them to reconsider. Not because they lacked the frameworks. Because they could not summon them at the moment that mattered.

This is not a criticism of the library. A library is a foundation. It was never meant to be a building.

Why models fail under pressure

The honest cognitive science here is uncomfortable, because it implicates the very thing the mental models movement is trying to improve — our thinking under pressure. When we are stressed, sleep-deprived, emotionally activated, or operating under time pressure — which is to say, exactly the conditions under which we most need a framework — our prefrontal cortex is least able to retrieve and apply one.

Kahneman called this the moment when System 1 hijacks System 2. You do not fail to know the right model. You fail to summon it. The model sits in your head, perfectly intact, perfectly understood, and perfectly unreachable.

I have seen this play out in three recognisable patterns over the years.

The founder selling under fear: a soft market, a slowing pipeline, and a bidder with an aggressive deadline. Fear narrows the option set to two — accept or refuse — and the richer question of whether either option is being framed correctly never enters the room. The founder has read about framing effects. The founder could explain the framing effect over coffee on a Saturday. On Tuesday at noon, as the deadline approaches, the framing effect is invisible to him.

The executive accepting a job under flattery: a prestigious recruiter, a generous offer, and the quiet thrill of being chosen. Social proof and status bias flood the decision so completely that the question of whether the actual job matches the executive's stated priorities never surfaces seriously. The executive has read Influence. The executive could lecture undergraduates on social proof. On the call with the recruiter, social proof is doing all the work and nobody is watching.

The parent making a school decision under anxiety: a perceived window closing, ambient pressure from other parents, and an imagined future that is almost entirely constructed from fear. Loss aversion drives a decision that, examined calmly six months later, was never about the school at all — it was about the parent's relationship with uncertainty. The parent has read about loss aversion. The parent could nod sagely as you described it to them. The parent's own loss aversion, in real time, was invisible to them.

None of these people are deficient. They are ordinary humans operating under the conditions ordinary humans face. The problem is not them. The problem is that the library they have built in their heads has no retrieval system attached to it.

The missing layer — structured retrieval

Between knowing a mental model and using one lies a missing layer. I think of it as structured retrieval under pressure. It is the infrastructure that summons the right framework at the right moment, whether or not the person inside the decision has the cognitive bandwidth to summon it themselves.

Professions where decisions carry life-and-death or career-ending consequences figured this out decades ago. Pilots do not rely on their ability to remember procedures under the stress of an engine failure — they rely on a checklist their training requires them to execute, which makes memory under pressure unnecessary. Surgeons do not rely on their ability to recall the right sequence during a complication — they rely on protocols developed over years by their field. Investment committees do not rely on individual brilliance to avoid catastrophic bets — they rely on decision memos and pre-mortems that force the analysis to happen in writing, calmly, before the capital is committed.

These professions long ago accepted a truth that individuals making personal and business decisions still resist: intelligence and knowledge are not enough. High-stakes decisions require external scaffolding that makes the thinking happen whether or not the thinker is at their best.

Most individuals making consequential decisions — about careers, businesses, money, relationships, health, education — have no such scaffolding. They have a library and no librarian. They have the raw intellectual material, and no system that retrieves the right piece at the right moment.

What a decision scaffold looks like

A good decision scaffold — whether it lives in a notebook, in a conversation with a trusted advisor, or in software — shares a small number of recognisable properties.

It captures the decision in plain language before allowing analysis to begin. This single step catches more bad decisions than any subsequent step. Writing the decision out forces the decider to confront what is actually being decided, who is actually affected, and what is actually at stake. A surprising number of decisions dissolve at this step, because the written version reveals that the real decision is not the one the decider thought they were making.

It asks framing questions that surface what the decider has already assumed. Most bad decisions are not made at the point of choice. They are made at the point of framing — the assumptions the decider brings to the table before any analysis begins. A scaffold that seriously asks, "What have you assumed here that you have not examined?" catches much of the damage before it is done.

It selects the right framework for the shape of this decision, not the decider's mood. An irreversible, high-stakes life decision demands regret minimisation or a ten-year thought experiment. A reversible, low-stakes business choice demands expected value and speed. The scaffold matches the tool to the work, rather than allowing the decider's current emotional state to dictate which framework gets applied.

It names the cognitive biases most likely to be active in this specific decision. Generic awareness of biases is, empirically, almost useless. Specific awareness — being told, in a particular situation, that confirmation bias and loss aversion are probably distorting your thinking right now — is far more protective. A scaffold that does this well gives the decider a fighting chance of seeing their own blind spots.

It forces a pre-mortem before commitment. This is perhaps the single highest-leverage component of any decision scaffold. Imagining that the decision has already failed, then working backwards to identify what caused the failure, surfaces risks that almost no amount of forward-looking analysis will catch.

And it produces a written record. Not for legal protection. For future self-knowledge. The decider who reviews, six or twelve months later, their own written reasoning at the moment of decision learns something about themselves that is almost impossible to learn any other way. This is how decision-making improves over a lifetime — slowly, through confrontation with one's own past reasoning.

None of this is new. Professional analysts have done all of it manually for decades, with notebooks and whiteboards and careful advisors. What is new is that AI now makes this scaffold accessible to individuals, at the moment of decision, at a cost that was previously the preserve of corporations and institutions.

Honest limitations

A decision scaffold cannot inject domain knowledge you lack. If you are deciding whether to have a risky surgery, the scaffold cannot replace a qualified physician. If you are deciding whether to invest in a complex financial instrument, the scaffold cannot replace a fiduciary advisor. A scaffold structures your thinking; it cannot become an expert on a subject you have not studied.

A decision scaffold cannot make a decision for you. It walks you through the questions. You answer them. The AI does not know things you have not told it. The AI does not detect lies you tell yourself that you refuse to examine. The AI does not override your autonomy, and should not try to.

A decision scaffold cannot compensate for a decider who is lying to the scaffold. The principle of garbage in, garbage out applies forcefully here. If you describe your situation in a way that omits the uncomfortable facts, the scaffold's output will reflect the omission.

These are not marketing concessions. They are real limits of the medium, and I think the Farnam Street audience in particular will have no patience for any product that fails to acknowledge them. The scaffold is a thinking partner, not an oracle. It is useful only in proportion to the honesty the decider brings to it.

An invitation, not a pitch

Here is what I actually want from anyone who reads this far.

Look at the last five significant decisions you have made. Career moves. Financial commitments. Relationship choices. Business pivots. Educational or health decisions for yourself or your family.

For each, ask yourself honestly: did you pause, before committing, and walk through a structured analysis? Or did you decide on the basis of feelings, urgency, social pressure, incomplete research, and a general sense of what felt right? Most of us — myself included, for most of my life — would answer honestly that we decided without scaffolding. Sometimes the decisions turned out well. Sometimes they did not. In almost every case, the quality of our thinking at the moment of decision was lower than it needed to be.

I have spent the last year building a tool that attempts to close exactly this gap. It is at decisionsmatter.ai. It is imperfect. It is early. I welcome thoughtful criticism — particularly from this community, which has shaped my thinking on this question more than any other.

But my larger point is not the tool. My larger point is that the mental models community has built an extraordinary library, and almost all of us are under-using it in the moments we need it most. Whether you use my tool, or build your own personal protocol, or work with a human advisor who does this for you — the missing layer is real, and addressing it is where the next decade of improvement in personal decision quality will come from.

· · ·

Postscript: the founder

The founder I began with did sell his company. Our Sunday evening call lasted about thirty minutes. I did not make the decision for him. I asked him the questions his own reading should have prompted him to ask, if he had been able to reach them. I walked him through a pre-mortem that surfaced two conditions he had not asked the buyer to agree to. He went back to the buyer on Monday morning, secured those conditions, and signed on Tuesday.

The decision did not change. The clarity with which it was made did. A year later, with the benefit of hindsight, he told me that the Sunday call was the single most valuable thirty minutes of the entire transaction — not because it changed the outcome, but because it allowed him to make the decision he was going to make anyway, from a place of structured thinking rather than structured panic.

That is the entire argument of this essay, in a single evening.

A library is not a habit. The gap between knowing and using is where most decision quality is won or lost. And the scaffolding that closes it is the quiet, undramatic, utterly under-appreciated centre of serious thinking about choice.

Harish Keswani

Founder of DecisionsMatter.ai and Business Mojo. 25+ years advising founders and executives on high-stakes decisions. First to introduce the Fractional CMO concept in India.

Experience the scaffold for yourself

Try the tool that turns frameworks into structured thinking — at the moment you need it most.