The Evidence Was Right There. They Couldn't See It.

In 1847, a Hungarian physician named Ignaz Semmelweis was working as an assistant obstetrician at Vienna General Hospital, where he noticed something that should have changed medicine immediately.

The hospital's maternity ward was divided into two clinics. The first was staffed by medical doctors and students. The second was staffed by midwives. Mortality rates from childbed fever — a bacterial infection that killed women shortly after childbirth — were dramatically higher in the first clinic than in the second.

Semmelweis investigated. The difference, he concluded, was that doctors and medical students regularly moved between the autopsy room and the delivery ward without washing their hands. They were carrying what he called "cadaverous particles" — what we now understand as bacteria — from corpses to patients.

He introduced a chlorine handwashing protocol. Mortality in the first clinic fell to match the second almost immediately.

The evidence was unambiguous. The intervention worked. Women stopped dying.

The medical establishment rejected his findings.

Was the evidence weak? No. Was the protocol difficult? Again, no.

But the dominant framework for understanding disease — miasma theory, the idea that illness spread through bad air — couldn't accommodate what Semmelweis had found. His findings didn't fit the model. And worse, they implied that doctors themselves were causing deaths.

That implication was so threatening to professional identity that it was easier to dismiss the evidence than to update the protocol. Semmelweis published his major work in 1861. He died in 1865, largely unrecognized.

His protocol wasn't widely adopted until Louis Pasteur's germ theory provided a new mental model spacious enough to hold what Semmelweis had already proven.

The evidence had been there for nearly two decades. What was missing wasn't evidence. It was a way of seeing that could admit it.

Cognitive inertia resists change

Cognitive inertia is the tendency to keep using an existing mental model even when new information suggests it no longer fits.

It's different from habit, which is behavioral. Habit is the path you walk without thinking. Cognitive inertia is the map you keep reading even when the road has changed.

It's different from sunk cost thinking, which is about past investment preventing future change. Cognitive inertia isn't about what you've spent. It's about how you've learned to see.

The mental model that once worked becomes the lens through which all new information gets filtered — and information that doesn't fit the model tends to get dismissed, minimized, or reinterpreted until it does.

This is why people can be shown evidence that contradicts their existing framework and still not change. The problem isn't that they're ignoring the evidence. It's that the evidence is being processed through a model that can't accommodate it.

The tendency to reject information that contradicts established beliefs now has a name: the Semmelweis Reflex. Named after the man whose correct findings were dismissed because the people who needed to hear them couldn't update the way they were thinking.

Nokia's map

In 2007, Nokia controlled approximately 49% of the global smartphone market. The company had spent decades building a hardware-first business — extraordinarily good at manufacturing phones, distributing them globally, and understanding what consumers wanted from a device built around calling and texting.

Then Apple released the iPhone.

Nokia's initial response, documented across multiple accounts, was confidence bordering on dismissal. The iPhone was a touchscreen device without a physical keyboard, with limited battery life and an unproven software ecosystem.

From inside Nokia's mental model — where a phone was primarily a communication and hardware device — the iPhone didn't look threatening. It looked like a niche product for people who prioritized aesthetics over function.

The model was wrong about what a phone was becoming.

The iPhone wasn't competing as a better phone. It was establishing that the phone was becoming a computer — a platform for software, experiences, and an ecosystem of applications. Nokia's hardware expertise was largely irrelevant to that competition.

By 2011, Nokia's market share had fallen from 49% to 25%. That February, Nokia's new CEO Stephen Elop sent an internal memo to all staff that became known as the "Burning Platform" memo. In it, he wrote: "The first iPhone shipped in 2007, and we still don't have a product that is close to their experience."

Four years after the iPhone launched, Nokia was still years behind.

In 2013, Microsoft acquired Nokia's mobile phone business for $7.2 billion — a fraction of Nokia's peak valuation of $250 billion. The hardware model had been among the best in the world. It just couldn't see what was replacing it.

Why mental models resist updating

Mental models form because they work. Nokia's model worked for decades. The miasma theory of disease was the best available explanation for how illness spread for centuries.

The problem is that good models, applied long enough, create the conditions for their own failure. When a mental model works consistently, it stops being questioned. Evidence that confirms it gets noticed. Evidence that challenges it gets explained away. Gradually, the model becomes less a tool for understanding and more a filter for what counts as real.

This is what makes cognitive inertia hard to address from the inside. By the time the model is visibly wrong, the people holding it have usually spent years successfully explaining away the early warning signs.

Cognitive inertia and product teams

Cognitive inertia isn't only a problem for phone manufacturers and nineteenth-century physicians. It shapes product decisions at every level.

The mental model most product teams carry is something like: more features equal more value. It's not stated explicitly. It shows up in how roadmaps get built, how success gets measured, and how requests from users get processed.

Microsoft's Zune is a small, clean example. When Microsoft set out to compete with the iPod in 2006, they built a device with more features — FM radio, WiFi sharing, a larger screen. The mental model was "match and exceed the feature list."

But Apple wasn't winning on hardware features. It was winning on the experience of buying, discovering, and organizing music through iTunes. Microsoft was reading a hardware map while Apple had moved to ecosystem territory. The Zune was discontinued in 2011.

The model worked well enough for long enough that it became invisible. What it obscures is that users aren't asking for features. They're asking for outcomes. The specific feature they name is their best guess at what would produce the outcome they need.

Teams that have built feature-first products for years find this reframe genuinely difficult because the existing model has answered enough questions convincingly enough that updating it feels unnecessary.

The update is uncomfortable for a reason

Updating a mental model is intellectually difficult. It can also be socially and professionally costly.

The physician who adopted Semmelweis's handwashing protocol was implicitly acknowledging that the previous approach was harming patients. The Nokia executive who conceded the iPhone represents a new category was acknowledging years of misread signals.

Mental models are professional commitments. Updating them means accounting for the decisions made under the old one. This is why cognitive inertia persists even when the evidence is clear. Seeing it fully means accepting what it implies about everything that came before.

What breaks the inertia

Mental models update when the cost of not updating finally exceeds the cost of updating.

For Nokia, the Burning Platform memo was the public acknowledgment that the update had arrived years after it was needed. For Semmelweis's colleagues, the update came after his death, when germ theory gave the medical establishment a new framework that could finally hold what he'd found.

The useful version of this insight for product teams isn't "update your models faster" — that's easier said than done. It's to build regular pressure-testing into how decisions get made. Not just "does this fit what we already believe?" but "what would have to be true for our current model to be wrong?"

That question doesn't feel natural. Mental models are built precisely so you don't have to ask it at every turn.
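One way to make the pressure-testing question concrete is to write each load-bearing belief down next to the observation that would disprove it, then log contradicting evidence against it during reviews. The sketch below is a hypothetical illustration of that practice, not a prescribed tool; the `Assumption` class and the Nokia-flavored example belief are invented for demonstration.

```python
from dataclasses import dataclass, field

@dataclass
class Assumption:
    """One load-bearing belief behind a roadmap decision."""
    belief: str
    falsifier: str  # the observation that would prove this belief wrong
    contradicting_evidence: list = field(default_factory=list)

    def challenged(self) -> bool:
        # The belief needs review once any contradicting evidence is logged.
        return len(self.contradicting_evidence) > 0

# Hypothetical example: a hardware-first model, written down so the
# falsifier is explicit instead of implicit.
model = [
    Assumption(
        belief="Buyers choose phones primarily on hardware quality",
        falsifier="Share shifts toward devices with weaker hardware "
                  "but stronger software ecosystems",
    ),
]

# During a review, new evidence gets logged against the belief it contradicts.
model[0].contradicting_evidence.append(
    "Touchscreen device with shorter battery life is gaining share"
)

for a in model:
    status = "REVIEW" if a.challenged() else "holds"
    print(f"[{status}] {a.belief}")
```

The point of the exercise isn't the code; it's that "what would have to be true for our model to be wrong?" gets answered in writing before the evidence arrives, so contradicting signals have somewhere to land instead of being explained away.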

But the teams and organizations that ask it regularly tend to notice when the map is drifting from reality before the gap becomes a cliff.
